Feb 18 19:33:55 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 18 19:33:55 crc restorecon[4699]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:33:55 crc restorecon[4699]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc 
restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc 
restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 
19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 
crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc 
restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:33:55 crc restorecon[4699]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:55 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 
crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc 
restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc 
restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:33:56 crc restorecon[4699]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc 
restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:33:56 crc restorecon[4699]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:33:56 crc restorecon[4699]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 18 19:33:56 crc kubenswrapper[4932]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:33:56 crc kubenswrapper[4932]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 18 19:33:56 crc kubenswrapper[4932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:33:56 crc kubenswrapper[4932]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 18 19:33:56 crc kubenswrapper[4932]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 18 19:33:56 crc kubenswrapper[4932]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.877524 4932 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.887136 4932 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.887351 4932 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.887453 4932 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.887544 4932 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.887632 4932 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.887724 4932 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.887818 4932 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.887966 4932 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.888076 4932 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.888256 4932 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.888356 4932 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.888445 4932 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.888555 4932 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.888653 4932 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.888746 4932 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.888835 4932 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.888935 4932 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.889030 4932 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.889119 4932 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.889254 4932 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.889351 4932 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.889440 4932 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.889527 4932 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.889625 4932 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.889716 4932 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.889804 4932 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.889901 4932 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.889994 4932 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.890087 4932 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.890217 4932 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.890317 4932 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.890406 4932 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.890512 4932 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.890608 4932 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.890711 4932 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.890803 4932 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.890892 4932 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.890982 4932 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.891075 4932 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.891165 4932 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.891437 4932 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.891537 4932 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.891630 4932 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.891720 4932 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.891809 4932 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.891898 4932 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.891987 4932 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.892115 4932 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.892260 4932 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.892358 4932 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.892448 4932 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.892537 4932 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.892626 4932 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.892714 4932 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.892820 4932 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.892913 4932 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.893003 4932 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.893092 4932 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.893206 4932 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.893322 4932 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.893426 4932 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.893516 4932 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.893604 4932 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.893691 4932 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.893779 4932 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.893871 4932 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.893960 4932 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.894058 4932 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.894148 4932 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.894267 4932 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.894358 4932 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.895667 4932 flags.go:64] FLAG: --address="0.0.0.0"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.895820 4932 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.895972 4932 flags.go:64] FLAG: --anonymous-auth="true"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.896101 4932 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.896275 4932 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.896377 4932 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.896474 4932 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.896570 4932 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.896663 4932 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.896755 4932 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.896873 4932 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.896974 4932 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.897068 4932 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.897158 4932 flags.go:64] FLAG: --cgroup-root=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.897288 4932 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.897383 4932 flags.go:64] FLAG: --client-ca-file=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.897475 4932 flags.go:64] FLAG: --cloud-config=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.897565 4932 flags.go:64] FLAG: --cloud-provider=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.897675 4932 flags.go:64] FLAG: --cluster-dns="[]"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.897774 4932 flags.go:64] FLAG: --cluster-domain=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.897866 4932 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.897959 4932 flags.go:64] FLAG: --config-dir=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.898051 4932 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.898203 4932 flags.go:64] FLAG: --container-log-max-files="5"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.898330 4932 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.898446 4932 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.898554 4932 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.898651 4932 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.898743 4932 flags.go:64] FLAG: --contention-profiling="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.898834 4932 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.898937 4932 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.899033 4932 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.899134 4932 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.899277 4932 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.899379 4932 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.899470 4932 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.899563 4932 flags.go:64] FLAG: --enable-load-reader="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.899672 4932 flags.go:64] FLAG: --enable-server="true"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.899767 4932 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.899900 4932 flags.go:64] FLAG: --event-burst="100"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.900012 4932 flags.go:64] FLAG: --event-qps="50"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.900114 4932 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.900253 4932 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.900354 4932 flags.go:64] FLAG: --eviction-hard=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.900452 4932 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.900543 4932 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.900635 4932 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.900732 4932 flags.go:64] FLAG: --eviction-soft=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.900844 4932 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.900941 4932 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.901032 4932 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.901123 4932 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.901243 4932 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.901340 4932 flags.go:64] FLAG: --fail-swap-on="true"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.901431 4932 flags.go:64] FLAG: --feature-gates=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.901545 4932 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.901642 4932 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.901734 4932 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.901826 4932 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.901926 4932 flags.go:64] FLAG: --healthz-port="10248"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.902019 4932 flags.go:64] FLAG: --help="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.902110 4932 flags.go:64] FLAG: --hostname-override=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.902276 4932 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.902384 4932 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.902477 4932 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.902578 4932 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.902673 4932 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.902765 4932 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.902856 4932 flags.go:64] FLAG: --image-service-endpoint=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.902947 4932 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.903048 4932 flags.go:64] FLAG: --kube-api-burst="100"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.903143 4932 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.903272 4932 flags.go:64] FLAG: --kube-api-qps="50"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.903367 4932 flags.go:64] FLAG: --kube-reserved=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.903460 4932 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.903550 4932 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.903643 4932 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.903776 4932 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.903874 4932 flags.go:64] FLAG: --lock-file=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.903967 4932 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.904059 4932 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.904152 4932 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.904341 4932 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.904440 4932 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.904552 4932 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.904650 4932 flags.go:64] FLAG: --logging-format="text"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.904743 4932 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.904837 4932 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.904928 4932 flags.go:64] FLAG: --manifest-url=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.905018 4932 flags.go:64] FLAG: --manifest-url-header=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.905114 4932 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.905253 4932 flags.go:64] FLAG: --max-open-files="1000000"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.905358 4932 flags.go:64] FLAG: --max-pods="110"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.905452 4932 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.905544 4932 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.905636 4932 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.905727 4932 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.905819 4932 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.905924 4932 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.906019 4932 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.906138 4932 flags.go:64] FLAG: --node-status-max-images="50"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.906264 4932 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.906360 4932 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.906453 4932 flags.go:64] FLAG: --pod-cidr=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.906545 4932 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.906662 4932 flags.go:64] FLAG: --pod-manifest-path=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.906766 4932 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.906863 4932 flags.go:64] FLAG: --pods-per-core="0"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.906955 4932 flags.go:64] FLAG: --port="10250"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.907048 4932 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.907139 4932 flags.go:64] FLAG: --provider-id=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.907365 4932 flags.go:64] FLAG: --qos-reserved=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.907485 4932 flags.go:64] FLAG: --read-only-port="10255"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.907659 4932 flags.go:64] FLAG: --register-node="true"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.907760 4932 flags.go:64] FLAG: --register-schedulable="true"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.907853 4932 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.907970 4932 flags.go:64] FLAG: --registry-burst="10"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.908065 4932 flags.go:64] FLAG: --registry-qps="5"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.908157 4932 flags.go:64] FLAG: --reserved-cpus=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.908321 4932 flags.go:64] FLAG: --reserved-memory=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.908460 4932 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.908559 4932 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.908651 4932 flags.go:64] FLAG: --rotate-certificates="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.908742 4932 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.908832 4932 flags.go:64] FLAG: --runonce="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.908922 4932 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.909015 4932 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.909117 4932 flags.go:64] FLAG: --seccomp-default="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.909243 4932 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.909341 4932 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.909433 4932 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.909525 4932 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.909618 4932 flags.go:64] FLAG: --storage-driver-password="root"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.909709 4932 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.909817 4932 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.909913 4932 flags.go:64] FLAG: --storage-driver-user="root"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.910013 4932 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.910107 4932 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.910231 4932 flags.go:64] FLAG: --system-cgroups=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.910389 4932 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.910492 4932 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.910602 4932 flags.go:64] FLAG: --tls-cert-file=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.910707 4932 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.910804 4932 flags.go:64] FLAG: --tls-min-version=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.910896 4932 flags.go:64] FLAG: --tls-private-key-file=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.910986 4932 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.911078 4932 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.911195 4932 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.911330 4932 flags.go:64] FLAG: --v="2"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.911430 4932 flags.go:64] FLAG: --version="false"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.911481 4932 flags.go:64] FLAG: --vmodule=""
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.911505 4932 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.911520 4932 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912002 4932 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912044 4932 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912064 4932 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912078 4932 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912088 4932 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912100 4932 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912110 4932 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912120 4932 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912131 4932 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912141 4932 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912151 4932 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912161 4932 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912221 4932 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912234 4932 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912246 4932 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912256 4932 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912266 4932 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912276 4932 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912286 4932 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912297 4932 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912306 4932 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912316 4932 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912325 4932 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912334 4932 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912343 4932 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912352 4932 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912361 4932 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912370 4932 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912379 4932 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912390 4932 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912401 4932 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912410 4932 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912419 4932 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912431 4932 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912442 4932 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912451 4932 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912461 4932 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912472 4932 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912485 4932 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912497 4932 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912509 4932 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912519 4932 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912530 4932 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912539 4932 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912550 4932 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912560 4932 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912572 4932 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912584 4932 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912593 4932 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912602 4932 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912613 4932 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912623 4932 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912632 4932 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912642 4932 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912656 4932 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912671 4932 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912682 4932 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912696 4932 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912710 4932 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912724 4932 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912736 4932 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912748 4932 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912758 4932 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912767 4932 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912779 4932 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912792 4932 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912805 4932 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912817 4932 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912828 4932 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912837 4932 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.912848 4932 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.912866 4932 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.927399 4932 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.927468 4932 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927638 4932 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927654 4932 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927665 4932 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927675 4932 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927684 4932 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927694 4932 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927703 4932 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927712 4932 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927721 4932 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927730 4932 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927738 4932 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927746 4932 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927754 4932 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927762 4932 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927770 4932 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927779 4932 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927787 4932 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927795 4932 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927804 4932 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927832 4932 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927840 4932 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927848 4932 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927856 4932 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927864 4932 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.927872 4932 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928262 4932 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 19:33:56 crc
kubenswrapper[4932]: W0218 19:33:56.928278 4932 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928290 4932 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928308 4932 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928317 4932 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928326 4932 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928335 4932 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928343 4932 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928354 4932 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928367 4932 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928375 4932 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928384 4932 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928392 4932 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928400 4932 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928409 4932 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928417 4932 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928425 4932 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928434 4932 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928442 4932 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928450 4932 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928458 4932 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928466 4932 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928474 4932 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928482 4932 feature_gate.go:330] unrecognized 
feature gate: GatewayAPI Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928490 4932 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928498 4932 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928506 4932 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928514 4932 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928521 4932 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928529 4932 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928538 4932 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928545 4932 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928556 4932 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928564 4932 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928572 4932 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928580 4932 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928588 4932 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928599 4932 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928609 4932 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928620 4932 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928630 4932 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928638 4932 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928646 4932 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928654 4932 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928662 4932 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928673 4932 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.928688 4932 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928961 4932 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928974 4932 feature_gate.go:330] 
unrecognized feature gate: EtcdBackendQuota Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928984 4932 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.928994 4932 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929003 4932 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929014 4932 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929023 4932 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929031 4932 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929040 4932 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929048 4932 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929056 4932 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929065 4932 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929073 4932 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929082 4932 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929091 4932 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929099 4932 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929107 
4932 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929115 4932 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929123 4932 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929132 4932 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929139 4932 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929150 4932 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929160 4932 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929169 4932 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929202 4932 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929210 4932 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929218 4932 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929226 4932 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929236 4932 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929246 4932 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929254 4932 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929262 4932 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929270 4932 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929282 4932 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929293 4932 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929302 4932 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929311 4932 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929320 4932 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929328 4932 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929337 4932 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929344 4932 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929353 4932 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929360 4932 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 
19:33:56.929368 4932 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929376 4932 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929384 4932 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929392 4932 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929400 4932 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929408 4932 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929415 4932 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929425 4932 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929433 4932 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929441 4932 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929449 4932 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929456 4932 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929465 4932 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929472 4932 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929480 4932 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation 
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929488 4932 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929496 4932 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929504 4932 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929512 4932 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929520 4932 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929528 4932 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929535 4932 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929543 4932 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929552 4932 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929560 4932 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929567 4932 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929577 4932 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 19:33:56 crc kubenswrapper[4932]: W0218 19:33:56.929587 4932 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.929601 4932 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.929941 4932 server.go:940] "Client rotation is on, will bootstrap in background" Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.935911 4932 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.936056 4932 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.938006 4932 server.go:997] "Starting client certificate rotation" Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.938057 4932 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.938353 4932 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-18 06:42:12.971647559 +0000 UTC Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.938477 4932 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.970749 4932 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:33:56 crc kubenswrapper[4932]: E0218 19:33:56.975961 4932 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.976482 4932 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:33:56 crc kubenswrapper[4932]: I0218 19:33:56.995702 4932 log.go:25] "Validated CRI v1 runtime API" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.038868 4932 log.go:25] "Validated CRI v1 image API" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.041873 4932 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.048686 4932 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-19-28-42-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.048724 4932 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.078750 4932 manager.go:217] Machine: {Timestamp:2026-02-18 19:33:57.074027553 +0000 UTC m=+0.655982438 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ded33a9e-53d9-4a60-ad08-559411f62337 BootID:bf5b35af-cf95-424f-9da2-9aceebb0ceec Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:20:a0:bb Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:20:a0:bb Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:18:6e:36 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:28:e3:75 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f3:10:56 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ba:7d:b0 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:92:f6:01:ea:55:a3 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d2:18:3d:3d:78:4f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.079433 4932 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.079734 4932 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.080318 4932 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.080757 4932 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.080820 4932 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.081214 4932 topology_manager.go:138] "Creating topology manager with none policy" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.081236 4932 container_manager_linux.go:303] "Creating device plugin manager" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.081835 4932 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.081908 4932 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.082940 4932 state_mem.go:36] "Initialized new in-memory state store" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.083113 4932 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.086663 4932 kubelet.go:418] "Attempting to sync node with API server" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.086703 4932 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.086723 4932 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.086739 4932 kubelet.go:324] "Adding apiserver pod source" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.086755 4932 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 18 19:33:57 crc kubenswrapper[4932]: W0218 19:33:57.092762 4932 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:33:57 crc kubenswrapper[4932]: W0218 19:33:57.092841 4932 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.092890 4932 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.092933 4932 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.092933 4932 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.094313 4932 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.095989 4932 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.097808 4932 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.097851 4932 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.097868 4932 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.097884 4932 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.097906 4932 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.097920 4932 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.097934 4932 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.097956 4932 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.097973 4932 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.097991 4932 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.098023 4932 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.098038 4932 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.099870 4932 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.100757 4932 server.go:1280] "Started kubelet" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.101381 4932 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.103261 4932 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.103490 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.103927 4932 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.104331 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.104365 4932 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.104672 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:09:40.089156084 +0000 UTC Feb 18 19:33:57 crc systemd[1]: Started Kubernetes Kubelet. 
Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.105881 4932 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.105901 4932 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.105995 4932 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.105000 4932 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.106747 4932 factory.go:55] Registering systemd factory Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.106771 4932 factory.go:221] Registration of the systemd container factory successfully Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.108260 4932 factory.go:153] Registering CRI-O factory Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.108323 4932 factory.go:221] Registration of the crio container factory successfully Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.108470 4932 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.108511 4932 factory.go:103] Registering Raw factory Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.108547 4932 manager.go:1196] Started watching for new ooms in manager Feb 18 19:33:57 crc kubenswrapper[4932]: W0218 19:33:57.109367 4932 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 
19:33:57.109495 4932 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.110362 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="200ms" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.120224 4932 manager.go:319] Starting recovery of all containers Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.128080 4932 server.go:460] "Adding debug handlers to kubelet server" Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.126680 4932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18956e3d1727e610 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:33:57.100709392 +0000 UTC m=+0.682664277,LastTimestamp:2026-02-18 19:33:57.100709392 +0000 UTC m=+0.682664277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.142792 4932 manager.go:324] Recovery completed Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.142876 4932 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.142964 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143015 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143040 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143066 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143091 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143117 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143145 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143203 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143238 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143263 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143283 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143307 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143331 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143349 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143369 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143394 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143421 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143447 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143473 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143499 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143528 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143555 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143579 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143602 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143631 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143663 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143692 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143720 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143747 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143773 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143799 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143827 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143851 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143876 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143899 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143926 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143951 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.143977 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144006 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144036 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144061 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144086 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144115 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144140 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144166 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144265 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144295 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144326 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144352 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144380 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144404 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144438 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144467 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144493 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144518 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144545 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144570 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144597 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144622 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144646 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144671 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144695 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144719 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144747 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144771 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144798 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144821 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144850 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144874 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144900 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144926 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.144951 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" 
seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145006 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145032 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145058 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145084 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145109 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145134 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145213 4932 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145247 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145276 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145302 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145327 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145355 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145384 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145411 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145437 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145464 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145494 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145520 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145546 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145575 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145603 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145631 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145657 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145687 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145715 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145743 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145768 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145796 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145825 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145850 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.145876 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" 
seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.146510 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.146544 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.146574 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.146605 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150166 4932 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150271 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150303 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150331 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150356 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150375 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150390 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150405 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150420 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150434 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150448 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150463 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150479 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150491 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150504 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150518 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150535 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150551 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150565 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150579 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150595 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150610 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150625 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150640 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150655 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150670 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" 
seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150685 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150699 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150716 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150730 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150746 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150760 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 
19:33:57.150775 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150789 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150817 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150832 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150847 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150861 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150875 4932 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150890 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150904 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150918 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150933 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150953 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150973 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.150993 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151012 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151026 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151041 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151055 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151069 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151083 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151099 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151115 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151143 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151157 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151189 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" 
seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151205 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151220 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151239 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151255 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151267 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151282 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 18 
19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151299 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151318 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151337 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151350 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151363 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151376 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151388 4932 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151402 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151416 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151429 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151443 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151456 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151471 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151485 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151500 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151513 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151528 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151541 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151556 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151570 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151583 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151598 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151611 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151625 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151638 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151652 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151666 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151680 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151694 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151709 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151724 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151737 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151751 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151765 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151780 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151793 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151807 4932 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" 
seLinuxMountContext="" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151822 4932 reconstruct.go:97] "Volume reconstruction finished" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.151831 4932 reconciler.go:26] "Reconciler: start to sync state" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.152408 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.154274 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.154316 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.154332 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.155492 4932 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.155512 4932 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.155531 4932 state_mem.go:36] "Initialized new in-memory state store" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.172819 4932 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.177769 4932 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.177851 4932 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.177886 4932 kubelet.go:2335] "Starting kubelet main sync loop" Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.177968 4932 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 18 19:33:57 crc kubenswrapper[4932]: W0218 19:33:57.178889 4932 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.179033 4932 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.181970 4932 policy_none.go:49] "None policy: Start" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.183351 4932 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.183395 4932 state_mem.go:35] "Initializing new in-memory state store" Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.207493 4932 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.242625 4932 manager.go:334] "Starting Device Plugin manager" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.242713 4932 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.242738 4932 server.go:79] "Starting device plugin registration server" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.243424 4932 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.243462 4932 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.243746 4932 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.244000 4932 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.244022 4932 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.250779 4932 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.279059 4932 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.279224 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.280923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.280973 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.280993 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.281244 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.281855 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.281980 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.282763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.282808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.282822 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.282996 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.283151 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.283236 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.284016 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.284074 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.284092 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.284014 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.284302 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.284354 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.284803 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.284835 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.284862 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.284887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: 
I0218 19:33:57.284869 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.285016 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.286001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.286021 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.286037 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.286652 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.286705 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.286725 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.286915 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.287142 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.287259 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.288303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.288339 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.288360 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.288580 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.288626 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.288660 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.288708 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.288727 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.289625 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.289698 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.289729 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.318446 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="400ms" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.343974 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.345288 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.345333 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.345348 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.345382 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.345906 4932 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.190:6443: connect: connection refused" node="crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355138 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 
18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355250 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355298 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355346 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355384 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355424 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355459 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355498 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355533 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355569 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355606 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355643 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355692 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355730 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.355796 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.457464 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458006 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458049 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458064 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.457757 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458128 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458213 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458082 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458294 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458330 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458364 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458403 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458439 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458464 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458507 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458529 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458475 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458574 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458585 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458613 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458652 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458688 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458699 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458764 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458723 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458750 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458892 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458967 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 
18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.458718 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.546863 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.548790 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.548909 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.548937 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.548984 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.550329 4932 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.190:6443: connect: connection refused" node="crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.618913 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.628922 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.644749 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.667341 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.674742 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:33:57 crc kubenswrapper[4932]: W0218 19:33:57.682068 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-2e2e098498deaef3c8f7a3656fdd0b479662ff9d6287ead463005968b867b2ff WatchSource:0}: Error finding container 2e2e098498deaef3c8f7a3656fdd0b479662ff9d6287ead463005968b867b2ff: Status 404 returned error can't find the container with id 2e2e098498deaef3c8f7a3656fdd0b479662ff9d6287ead463005968b867b2ff Feb 18 19:33:57 crc kubenswrapper[4932]: W0218 19:33:57.682715 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e4d21a161421bda0c242cfb98dea1be9b5bb0c936ee5950b5b64af1cf81fe8be WatchSource:0}: Error finding container e4d21a161421bda0c242cfb98dea1be9b5bb0c936ee5950b5b64af1cf81fe8be: Status 404 returned error can't find the container with id e4d21a161421bda0c242cfb98dea1be9b5bb0c936ee5950b5b64af1cf81fe8be Feb 18 19:33:57 crc kubenswrapper[4932]: W0218 19:33:57.690829 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4bb24564de308c177ee69425cf670690192f71c4a2847f6041418bc9eec88751 WatchSource:0}: Error finding container 4bb24564de308c177ee69425cf670690192f71c4a2847f6041418bc9eec88751: Status 404 returned 
error can't find the container with id 4bb24564de308c177ee69425cf670690192f71c4a2847f6041418bc9eec88751 Feb 18 19:33:57 crc kubenswrapper[4932]: W0218 19:33:57.700796 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c98daf293fb27a7c55e5d961e9da353f1b179aa25139bb3d2b2479abddf72b10 WatchSource:0}: Error finding container c98daf293fb27a7c55e5d961e9da353f1b179aa25139bb3d2b2479abddf72b10: Status 404 returned error can't find the container with id c98daf293fb27a7c55e5d961e9da353f1b179aa25139bb3d2b2479abddf72b10 Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.720146 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="800ms" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.951550 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.953278 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.953329 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.953342 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:57 crc kubenswrapper[4932]: I0218 19:33:57.953372 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:33:57 crc kubenswrapper[4932]: E0218 19:33:57.954130 4932 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.190:6443: connect: connection refused" node="crc" Feb 18 19:33:58 crc kubenswrapper[4932]: I0218 19:33:58.104919 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:33:58 crc kubenswrapper[4932]: I0218 19:33:58.105822 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:01:47.557907234 +0000 UTC Feb 18 19:33:58 crc kubenswrapper[4932]: I0218 19:33:58.186297 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4bb24564de308c177ee69425cf670690192f71c4a2847f6041418bc9eec88751"} Feb 18 19:33:58 crc kubenswrapper[4932]: I0218 19:33:58.187933 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e4d21a161421bda0c242cfb98dea1be9b5bb0c936ee5950b5b64af1cf81fe8be"} Feb 18 19:33:58 crc kubenswrapper[4932]: I0218 19:33:58.189742 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2e2e098498deaef3c8f7a3656fdd0b479662ff9d6287ead463005968b867b2ff"} Feb 18 19:33:58 crc kubenswrapper[4932]: I0218 19:33:58.191509 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2c3740b0be7dd56c31cab013bf0a7330ece3e0adca531740e2b8e58aeb27debe"} Feb 18 19:33:58 crc kubenswrapper[4932]: W0218 19:33:58.192205 4932 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:33:58 crc kubenswrapper[4932]: E0218 19:33:58.192320 4932 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:33:58 crc kubenswrapper[4932]: I0218 19:33:58.193592 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c98daf293fb27a7c55e5d961e9da353f1b179aa25139bb3d2b2479abddf72b10"} Feb 18 19:33:58 crc kubenswrapper[4932]: W0218 19:33:58.196670 4932 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:33:58 crc kubenswrapper[4932]: E0218 19:33:58.196783 4932 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:33:58 crc kubenswrapper[4932]: W0218 19:33:58.272029 4932 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 
38.102.83.190:6443: connect: connection refused Feb 18 19:33:58 crc kubenswrapper[4932]: E0218 19:33:58.272228 4932 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:33:58 crc kubenswrapper[4932]: E0218 19:33:58.520927 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="1.6s" Feb 18 19:33:58 crc kubenswrapper[4932]: W0218 19:33:58.659130 4932 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:33:58 crc kubenswrapper[4932]: E0218 19:33:58.659300 4932 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:33:58 crc kubenswrapper[4932]: I0218 19:33:58.754293 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:58 crc kubenswrapper[4932]: I0218 19:33:58.756414 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:58 crc kubenswrapper[4932]: I0218 19:33:58.756608 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:33:58 crc kubenswrapper[4932]: I0218 19:33:58.756746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:58 crc kubenswrapper[4932]: I0218 19:33:58.756902 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:33:58 crc kubenswrapper[4932]: E0218 19:33:58.757670 4932 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.190:6443: connect: connection refused" node="crc" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.090282 4932 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 19:33:59 crc kubenswrapper[4932]: E0218 19:33:59.092436 4932 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.104628 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.106878 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:11:33.407815011 +0000 UTC Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.201230 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28"} Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.201296 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e"} Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.201317 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04"} Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.206323 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c" exitCode=0 Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.206445 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c"} Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.206561 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.208470 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.208515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.208645 4932 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.209108 4932 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fa20d1549113cb3aad03ce60838085f0ad49599a6fb652c6818377b9baee6edc" exitCode=0 Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.209230 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fa20d1549113cb3aad03ce60838085f0ad49599a6fb652c6818377b9baee6edc"} Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.209403 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.211153 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.211243 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.211262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.212529 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.212879 4932 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb" exitCode=0 Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.212989 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.213009 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb"} Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.214396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.214464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.214492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.214638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.214677 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.214700 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.216442 4932 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9" exitCode=0 Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.216504 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9"} Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.216627 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:33:59 
crc kubenswrapper[4932]: I0218 19:33:59.218589 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.218634 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:33:59 crc kubenswrapper[4932]: I0218 19:33:59.218655 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:33:59 crc kubenswrapper[4932]: W0218 19:33:59.942284 4932 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:33:59 crc kubenswrapper[4932]: E0218 19:33:59.942765 4932 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.104998 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.107202 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:36:01.469432467 +0000 UTC Feb 18 19:34:00 crc kubenswrapper[4932]: E0218 19:34:00.122084 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="3.2s" Feb 18 19:34:00 crc kubenswrapper[4932]: E0218 19:34:00.143567 4932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18956e3d1727e610 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:33:57.100709392 +0000 UTC m=+0.682664277,LastTimestamp:2026-02-18 19:33:57.100709392 +0000 UTC m=+0.682664277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:34:00 crc kubenswrapper[4932]: W0218 19:34:00.212469 4932 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:34:00 crc kubenswrapper[4932]: E0218 19:34:00.212574 4932 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:34:00 crc kubenswrapper[4932]: W0218 19:34:00.216564 4932 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Feb 18 19:34:00 crc kubenswrapper[4932]: E0218 19:34:00.216684 4932 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.220793 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052"} Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.220825 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.223413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.223457 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.223467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.224353 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802"} Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.224412 4932 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248"} Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.224433 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672"} Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.224455 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.227365 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.227401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.227415 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.229741 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b"} Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.229804 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.231341 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.231381 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.231395 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.234037 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0"} Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.234070 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18"} Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.234088 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7"} Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.235642 4932 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="676f220fbabbdf1f764a0ac9856dcfc9d8f6543b96228f378b3bd30c8ab34986" exitCode=0 Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.235683 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"676f220fbabbdf1f764a0ac9856dcfc9d8f6543b96228f378b3bd30c8ab34986"} Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.235883 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.236869 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.236906 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.236920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.357988 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.364892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.364944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.364966 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.364999 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:34:00 crc kubenswrapper[4932]: E0218 19:34:00.365617 4932 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.190:6443: connect: connection refused" node="crc" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.655005 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:34:00 crc kubenswrapper[4932]: I0218 19:34:00.665491 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 
19:34:01.107373 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:57:54.750093858 +0000 UTC Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.253712 4932 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="05c608056602e82f1d72f327241bccbf4b1a4f33f9e8512cbf3c44689c7e7ec0" exitCode=0 Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.253822 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"05c608056602e82f1d72f327241bccbf4b1a4f33f9e8512cbf3c44689c7e7ec0"} Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.253929 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.255481 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.255544 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.255564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.261259 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203"} Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.261339 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601"} Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.261392 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.261556 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.261826 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.262236 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.262245 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.262758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.262782 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.262793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.263658 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.263675 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.263683 4932 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.263796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.263861 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.263883 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.264283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.264473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:01 crc kubenswrapper[4932]: I0218 19:34:01.264608 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.107945 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:13:26.030531957 +0000 UTC Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.270549 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.270570 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dfac026c12a8bf498c2bb79250930f207d0064ffd0edbef2e5e24cfa93a62971"} Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.270665 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.270690 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.270721 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"23ec1738bcf1d2dd647db0d373af934b11154dff53044ff834fe7257b32f17d1"} Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.270618 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.270741 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c6cb57625b5582e558b55c361284729fc2052214bf528e7458937568887515e9"} Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.270812 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.272235 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.272303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.272243 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.272332 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.272359 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.272381 4932 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.273401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.273484 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.273505 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:02 crc kubenswrapper[4932]: I0218 19:34:02.813533 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.108418 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:35:33.194578246 +0000 UTC Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.281671 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eb79fa7b7288b35bb4bcd652d79107019527c1171639893fef92b89d26303412"} Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.281738 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"13d6e02f2655abbd491fd87d24365a4cd72db2765eb2c05c5553febfa7be962a"} Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.281775 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.281784 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 
19:34:03.283496 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.283587 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.283609 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.283668 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.283721 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.283745 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.475891 4932 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.566464 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.568308 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.568361 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.568375 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:03 crc kubenswrapper[4932]: I0218 19:34:03.568415 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:34:03 crc 
kubenswrapper[4932]: I0218 19:34:03.873588 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:34:04 crc kubenswrapper[4932]: I0218 19:34:04.109312 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:12:44.647242854 +0000 UTC Feb 18 19:34:04 crc kubenswrapper[4932]: I0218 19:34:04.284719 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:04 crc kubenswrapper[4932]: I0218 19:34:04.284719 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:04 crc kubenswrapper[4932]: I0218 19:34:04.286231 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:04 crc kubenswrapper[4932]: I0218 19:34:04.286264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:04 crc kubenswrapper[4932]: I0218 19:34:04.286273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:04 crc kubenswrapper[4932]: I0218 19:34:04.286430 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:04 crc kubenswrapper[4932]: I0218 19:34:04.286488 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:04 crc kubenswrapper[4932]: I0218 19:34:04.286509 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:04 crc kubenswrapper[4932]: I0218 19:34:04.661407 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 18 19:34:05 crc kubenswrapper[4932]: I0218 
19:34:05.110424 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:42:00.395155693 +0000 UTC Feb 18 19:34:05 crc kubenswrapper[4932]: I0218 19:34:05.287585 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:05 crc kubenswrapper[4932]: I0218 19:34:05.287705 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:05 crc kubenswrapper[4932]: I0218 19:34:05.289310 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:05 crc kubenswrapper[4932]: I0218 19:34:05.289344 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:05 crc kubenswrapper[4932]: I0218 19:34:05.289386 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:05 crc kubenswrapper[4932]: I0218 19:34:05.289404 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:05 crc kubenswrapper[4932]: I0218 19:34:05.289360 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:05 crc kubenswrapper[4932]: I0218 19:34:05.289541 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:06 crc kubenswrapper[4932]: I0218 19:34:06.110740 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:41:54.947269104 +0000 UTC Feb 18 19:34:07 crc kubenswrapper[4932]: I0218 19:34:07.111548 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-22 00:26:56.063491197 +0000 UTC Feb 18 19:34:07 crc kubenswrapper[4932]: E0218 19:34:07.250956 4932 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 19:34:07 crc kubenswrapper[4932]: I0218 19:34:07.308996 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:34:07 crc kubenswrapper[4932]: I0218 19:34:07.309215 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:07 crc kubenswrapper[4932]: I0218 19:34:07.310399 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:07 crc kubenswrapper[4932]: I0218 19:34:07.310432 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:07 crc kubenswrapper[4932]: I0218 19:34:07.310442 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:07 crc kubenswrapper[4932]: I0218 19:34:07.761630 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 18 19:34:07 crc kubenswrapper[4932]: I0218 19:34:07.762755 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:07 crc kubenswrapper[4932]: I0218 19:34:07.766131 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:07 crc kubenswrapper[4932]: I0218 19:34:07.766224 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:07 crc kubenswrapper[4932]: I0218 19:34:07.766238 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:08 crc 
kubenswrapper[4932]: I0218 19:34:08.112378 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:13:02.05298446 +0000 UTC Feb 18 19:34:08 crc kubenswrapper[4932]: I0218 19:34:08.657051 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:34:08 crc kubenswrapper[4932]: I0218 19:34:08.657474 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:08 crc kubenswrapper[4932]: I0218 19:34:08.659378 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:08 crc kubenswrapper[4932]: I0218 19:34:08.659435 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:08 crc kubenswrapper[4932]: I0218 19:34:08.659456 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:08 crc kubenswrapper[4932]: I0218 19:34:08.662754 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:34:09 crc kubenswrapper[4932]: I0218 19:34:09.113915 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 05:12:57.874346311 +0000 UTC Feb 18 19:34:09 crc kubenswrapper[4932]: I0218 19:34:09.304230 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:09 crc kubenswrapper[4932]: I0218 19:34:09.305367 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:09 crc kubenswrapper[4932]: I0218 19:34:09.305441 4932 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:09 crc kubenswrapper[4932]: I0218 19:34:09.305462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:10 crc kubenswrapper[4932]: I0218 19:34:10.000146 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:34:10 crc kubenswrapper[4932]: I0218 19:34:10.114065 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:54:38.172701433 +0000 UTC Feb 18 19:34:10 crc kubenswrapper[4932]: I0218 19:34:10.306965 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:10 crc kubenswrapper[4932]: I0218 19:34:10.308264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:10 crc kubenswrapper[4932]: I0218 19:34:10.308309 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:10 crc kubenswrapper[4932]: I0218 19:34:10.308322 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:11 crc kubenswrapper[4932]: I0218 19:34:11.105590 4932 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 18 19:34:11 crc kubenswrapper[4932]: I0218 19:34:11.115226 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:30:31.401022215 +0000 UTC Feb 18 19:34:11 crc kubenswrapper[4932]: I0218 19:34:11.170205 4932 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 19:34:11 crc kubenswrapper[4932]: I0218 19:34:11.170298 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 19:34:11 crc kubenswrapper[4932]: I0218 19:34:11.178121 4932 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 19:34:11 crc kubenswrapper[4932]: I0218 19:34:11.178211 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 19:34:12 crc kubenswrapper[4932]: I0218 19:34:12.115604 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:18:20.223665441 +0000 UTC Feb 18 19:34:12 crc kubenswrapper[4932]: I0218 19:34:12.822165 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:34:12 crc kubenswrapper[4932]: I0218 19:34:12.822449 4932 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 18 19:34:12 crc kubenswrapper[4932]: I0218 19:34:12.823956 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:12 crc kubenswrapper[4932]: I0218 19:34:12.824023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:12 crc kubenswrapper[4932]: I0218 19:34:12.824042 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:12 crc kubenswrapper[4932]: I0218 19:34:12.829392 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:34:13 crc kubenswrapper[4932]: I0218 19:34:13.001094 4932 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 19:34:13 crc kubenswrapper[4932]: I0218 19:34:13.001249 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 19:34:13 crc kubenswrapper[4932]: I0218 19:34:13.117559 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:13:47.529940886 +0000 UTC Feb 18 19:34:13 crc kubenswrapper[4932]: I0218 19:34:13.315760 4932 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Feb 18 19:34:13 crc kubenswrapper[4932]: I0218 19:34:13.317254 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:13 crc kubenswrapper[4932]: I0218 19:34:13.317305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:13 crc kubenswrapper[4932]: I0218 19:34:13.317322 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:14 crc kubenswrapper[4932]: I0218 19:34:14.118542 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:33:39.675839321 +0000 UTC Feb 18 19:34:14 crc kubenswrapper[4932]: I0218 19:34:14.695552 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 18 19:34:14 crc kubenswrapper[4932]: I0218 19:34:14.695801 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:14 crc kubenswrapper[4932]: I0218 19:34:14.697436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:14 crc kubenswrapper[4932]: I0218 19:34:14.697491 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:14 crc kubenswrapper[4932]: I0218 19:34:14.697512 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:14 crc kubenswrapper[4932]: I0218 19:34:14.713126 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 18 19:34:15 crc kubenswrapper[4932]: I0218 19:34:15.119169 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 
UTC, rotation deadline is 2025-12-06 05:46:30.299406566 +0000 UTC Feb 18 19:34:15 crc kubenswrapper[4932]: I0218 19:34:15.322368 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:15 crc kubenswrapper[4932]: I0218 19:34:15.323828 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:15 crc kubenswrapper[4932]: I0218 19:34:15.323967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:15 crc kubenswrapper[4932]: I0218 19:34:15.324100 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.119646 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 10:00:53.118030197 +0000 UTC Feb 18 19:34:16 crc kubenswrapper[4932]: E0218 19:34:16.157393 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.161637 4932 trace.go:236] Trace[1478363106]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:34:01.411) (total time: 14749ms): Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[1478363106]: ---"Objects listed" error: 14749ms (19:34:16.161) Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[1478363106]: [14.749648202s] [14.749648202s] END Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.161882 4932 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.161938 4932 trace.go:236] Trace[1226745459]: 
"Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:34:03.553) (total time: 12607ms): Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[1226745459]: ---"Objects listed" error: 12607ms (19:34:16.161) Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[1226745459]: [12.60775738s] [12.60775738s] END Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.162299 4932 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.167226 4932 trace.go:236] Trace[923529667]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:34:03.825) (total time: 12340ms): Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[923529667]: ---"Objects listed" error: 12340ms (19:34:16.166) Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[923529667]: [12.340651489s] [12.340651489s] END Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.167290 4932 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 19:34:16 crc kubenswrapper[4932]: E0218 19:34:16.168329 4932 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.170678 4932 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.172329 4932 trace.go:236] Trace[1107073082]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:34:04.055) (total time: 12116ms): Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[1107073082]: ---"Objects listed" error: 12116ms (19:34:16.171) Feb 18 19:34:16 crc kubenswrapper[4932]: Trace[1107073082]: [12.116578067s] [12.116578067s] END Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.172384 4932 reflector.go:368] Caches 
populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.185936 4932 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.432677 4932 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36880->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.432733 4932 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36896->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.432771 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36896->192.168.126.11:17697: read: connection reset by peer" Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.432765 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36880->192.168.126.11:17697: read: connection reset by peer" Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.433271 4932 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 18 19:34:16 crc kubenswrapper[4932]: I0218 19:34:16.433308 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.101548 4932 apiserver.go:52] "Watching apiserver" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.105767 4932 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.106817 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.109005 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.109068 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.109139 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.109210 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.109209 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.109303 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.109482 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.109460 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.109575 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.113526 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.113931 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.115492 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.116913 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.116994 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.117523 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.117763 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.117816 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.117952 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.120009 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-11-24 20:51:47.765976753 +0000 UTC Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.142769 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.159123 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.176018 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.176091 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.176129 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.176168 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.176714 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.176814 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.176903 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:17.676867092 +0000 UTC m=+21.258821977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.177307 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.177416 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:17.677387793 +0000 UTC m=+21.259342678 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177455 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177506 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177543 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177594 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" 
(UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177630 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177662 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.177990 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.178331 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.178456 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: 
\"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.178978 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.179032 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.179114 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.179269 4932 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.182419 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.189155 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.190279 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.204362 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.204406 4932 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.204429 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.204510 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:17.704485028 +0000 UTC m=+21.286439913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.205886 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.206876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.207698 4932 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.209244 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.215564 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.215602 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.215624 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.215696 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:17.715674123 +0000 UTC m=+21.297629008 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.222213 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.225843 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.242708 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.259418 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.276347 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280124 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280225 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280268 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280311 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280358 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280415 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280453 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280490 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280529 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280561 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280595 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280631 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280669 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280706 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280742 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280775 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280773 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280810 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.280992 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281002 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281045 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281092 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281137 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281169 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281230 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281266 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281304 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281317 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281340 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281377 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281411 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281447 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281480 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281522 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281556 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281718 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281771 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281808 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281843 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281876 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281924 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281961 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.281996 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282031 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282068 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282102 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282137 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282196 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282231 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282265 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282307 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282346 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282381 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282414 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282447 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282480 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282516 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282553 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282589 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282626 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282661 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:34:17 crc 
kubenswrapper[4932]: I0218 19:34:17.282693 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282729 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.282766 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283136 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283204 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283240 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283279 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283314 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283351 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283391 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283428 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283462 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283498 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283533 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283570 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283605 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283637 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 
19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283673 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284536 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284598 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284639 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284675 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284708 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284743 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284777 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284810 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284846 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284883 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " 
Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284924 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286030 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.298224 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283083 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283103 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.283715 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284252 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284295 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284271 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284545 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.284580 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.285031 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.285122 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.285574 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.285860 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.285893 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286026 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.286041 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:34:17.785997866 +0000 UTC m=+21.367952751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.307947 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308144 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308238 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308350 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308406 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308427 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308541 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308591 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308879 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308922 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308958 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.308994 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309028 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309061 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309095 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309129 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 
19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309164 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309234 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309279 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309328 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309361 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309396 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309428 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309460 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309494 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309530 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309573 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 18 19:34:17 crc kubenswrapper[4932]: 
I0218 19:34:17.309609 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309650 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309689 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309744 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309797 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309834 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309874 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309912 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309945 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.309981 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310015 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 19:34:17 crc kubenswrapper[4932]: 
I0218 19:34:17.310053 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310092 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310128 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310163 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310231 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310265 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310302 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310335 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310371 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310395 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310409 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310503 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310581 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310666 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310843 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.310970 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311038 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.307544 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311131 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311241 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311321 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311395 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311442 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311519 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311590 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312010 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312121 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.311632 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312281 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312387 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312465 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312503 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312578 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312644 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312680 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312746 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312812 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312848 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312910 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.312946 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313013 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313078 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313113 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313233 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313273 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313339 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313373 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313437 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313500 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 
19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313540 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313603 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313642 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313708 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313783 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313817 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313885 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313956 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.313994 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314069 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314130 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:34:17 crc 
kubenswrapper[4932]: I0218 19:34:17.314167 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314237 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314305 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314294 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314342 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314411 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314480 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314518 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314607 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.314684 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.315792 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.307857 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286038 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286142 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286152 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286200 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286597 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286691 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.287636 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.286549 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.300094 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.301595 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.301896 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.301921 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.302119 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.302477 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.302669 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316395 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.302986 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.303852 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.303912 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.303945 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.304943 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.305688 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.305689 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.306435 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.306446 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.306569 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.306655 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.307158 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316424 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316466 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316676 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316673 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316769 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316818 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316893 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.316991 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.317238 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.317326 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.317556 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.317754 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.318125 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.318446 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.318541 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.318750 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.318812 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319062 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319224 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319316 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319392 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319424 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319521 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319763 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319962 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.319986 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.320424 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.320787 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.321559 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.321851 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.321961 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.322800 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.322814 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323333 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323417 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323469 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323520 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323566 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323606 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323606 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323651 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323696 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323926 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.324366 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.324428 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325040 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325197 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325244 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325297 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325312 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325609 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325634 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325659 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325983 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.325998 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.326063 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.326476 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.327113 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.327421 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.327565 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.327683 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.327983 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.328237 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.328256 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.328443 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.328765 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.329255 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.329542 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.329680 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.329921 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.329966 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.330921 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.331445 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.331541 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323829 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334015 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334024 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334266 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334282 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334293 4932 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334307 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334320 4932 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.333833 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334331 4932 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334408 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334431 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334447 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334468 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334462 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334509 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334504 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334530 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.323983 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334572 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath 
\"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334625 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334653 4932 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334673 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334688 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334701 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334718 4932 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334734 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334753 4932 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334770 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334787 4932 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334809 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334829 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334852 4932 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334869 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334886 4932 reconciler_common.go:293] "Volume 
detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334904 4932 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334923 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334949 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334967 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.334985 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335005 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335023 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335040 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335059 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335079 4932 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335097 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335487 4932 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335502 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335518 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335531 4932 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335544 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335559 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335576 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335590 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335603 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335616 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335633 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335647 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335661 4932 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335678 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335692 4932 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335706 4932 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335721 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335739 4932 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335753 4932 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335767 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335779 4932 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335792 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335804 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335818 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath 
\"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335830 4932 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335842 4932 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335855 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335869 4932 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335883 4932 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335897 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335913 4932 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335924 4932 reconciler_common.go:293] "Volume detached for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335940 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335952 4932 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335964 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335977 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.335989 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336001 4932 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336015 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336034 4932 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336047 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336060 4932 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336075 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336119 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336133 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336145 4932 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 
crc kubenswrapper[4932]: I0218 19:34:17.336157 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336188 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336202 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336217 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336230 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336264 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336281 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336299 4932 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336316 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336335 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336354 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336372 4932 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336389 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336402 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336415 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336428 4932 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336440 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336457 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336469 4932 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336481 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336496 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336509 4932 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 
19:34:17.336505 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336526 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336618 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336646 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336670 4932 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336691 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336713 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336733 4932 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336755 4932 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336778 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336799 4932 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336820 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336841 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336861 4932 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336880 4932 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.336900 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.338346 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.338691 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.339031 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.339269 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.339495 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.340423 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.341119 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.341428 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.341571 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.341761 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.341901 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.341822 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342190 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342232 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342378 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342453 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342517 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342688 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.342861 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.343078 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.343398 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.343874 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.344251 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.344486 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.344581 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.345649 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.346282 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.353026 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.353329 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.353765 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.353794 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.360810 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.360971 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.361054 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.363611 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.363656 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.363976 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.364348 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.365881 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.358363 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.366065 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.366202 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.366384 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.367430 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.367826 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.367988 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.368531 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.369820 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.371333 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.371518 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.371616 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.371689 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203" exitCode=255 Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.371732 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203"} Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.372030 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.372345 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.372778 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.372771 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.372872 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.373346 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.373456 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.373585 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.374606 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.375296 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.375207 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.375718 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.376135 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.376430 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.384420 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.391696 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.395066 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.395822 4932 scope.go:117] "RemoveContainer" containerID="5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.403961 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.405742 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.419301 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.421043 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.424291 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.425438 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.430917 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437092 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437365 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437391 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437411 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437421 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437429 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437438 4932 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437449 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437459 4932 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437468 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437478 4932 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437489 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437498 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437507 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437516 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437525 4932 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437533 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437542 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437550 4932 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437559 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437567 4932 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437576 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437585 4932 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437594 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437603 4932 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437615 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437625 4932 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437634 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437644 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437653 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437662 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437670 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437679 4932 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437687 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437697 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437706 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437715 4932 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437724 4932 reconciler_common.go:293] 
"Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437733 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437741 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437751 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437759 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437767 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437777 4932 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437788 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437797 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437806 4932 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437815 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437825 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437833 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437843 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437851 4932 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437859 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437868 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437877 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437885 4932 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437894 4932 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437902 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437911 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 
18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437920 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437929 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437940 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437949 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437959 4932 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437969 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437977 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437986 4932 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.437995 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.438005 4932 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.438014 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.438024 4932 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.438032 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.440596 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.440668 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.449722 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.457501 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: W0218 19:34:17.463781 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-47a93e424f270429cc3ea56dd41b3f224ad931ad27285afa8472ffece38f375b WatchSource:0}: Error finding container 47a93e424f270429cc3ea56dd41b3f224ad931ad27285afa8472ffece38f375b: Status 404 returned error can't find the container with id 47a93e424f270429cc3ea56dd41b3f224ad931ad27285afa8472ffece38f375b Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.475900 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.490913 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.510411 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.524514 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.538897 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.740497 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.740535 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:17 crc 
kubenswrapper[4932]: I0218 19:34:17.740558 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.740582 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740677 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740739 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:18.740721621 +0000 UTC m=+22.322676466 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740746 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740750 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740761 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740777 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740782 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740791 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740822 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:18.740810223 +0000 UTC m=+22.322765068 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740686 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740839 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:18.740832773 +0000 UTC m=+22.322787618 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.740866 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:18.740850013 +0000 UTC m=+22.322804858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:17 crc kubenswrapper[4932]: I0218 19:34:17.841388 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:17 crc kubenswrapper[4932]: E0218 19:34:17.841653 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:34:18.841603684 +0000 UTC m=+22.423558569 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.120922 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:49:54.69348219 +0000 UTC Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.178705 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.178850 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.379614 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.382922 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.383345 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.385628 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"47a93e424f270429cc3ea56dd41b3f224ad931ad27285afa8472ffece38f375b"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.389366 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.389437 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.389460 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2cd2fca04fdb6eed057d0b9ccad0238d16ec7b43dc6b6798111340d0d78114c9"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.392245 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.392281 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e2c7d37280f8d9292dac622ef6e34fc28791cd83fb2faf87fc669fbbd302e899"} Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.404929 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.420987 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.445319 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.467622 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.486658 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.508224 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.530254 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.553407 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.576613 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.598358 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.621079 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.643852 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.664797 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.682641 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.750597 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.750711 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.750751 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750782 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750818 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750831 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.750799 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750891 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750963 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:20.750945962 +0000 UTC m=+24.332900797 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750959 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750995 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.750981 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:20.750973843 +0000 UTC m=+24.332928688 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.751014 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.751111 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:20.751086075 +0000 UTC m=+24.333040950 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.751129 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.751225 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:20.751204058 +0000 UTC m=+24.333158913 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:18 crc kubenswrapper[4932]: I0218 19:34:18.851754 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:18 crc kubenswrapper[4932]: E0218 19:34:18.851956 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:34:20.851922398 +0000 UTC m=+24.433877243 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.121311 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 00:17:03.566662577 +0000 UTC Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.178676 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.178727 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:19 crc kubenswrapper[4932]: E0218 19:34:19.178829 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:19 crc kubenswrapper[4932]: E0218 19:34:19.178992 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.183457 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.184371 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.185342 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.186139 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.186911 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.187572 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.188380 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.189148 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.189948 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.190693 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.191444 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.192463 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.193142 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.193868 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.196863 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.197732 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.198819 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.199631 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.200757 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.201922 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.202782 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.203610 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.204331 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.205372 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.205954 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.206922 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.210511 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.211198 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.211947 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.212778 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.213474 4932 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.213623 4932 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.215473 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.216109 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.216798 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.219764 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.220816 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.221546 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.222221 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.222887 4932 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.223401 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.223989 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.224698 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.225318 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.225765 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.226313 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.226859 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.228926 4932 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.229811 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.233043 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.233814 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.234799 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.236621 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.237297 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.913594 4932 csr.go:261] certificate signing request csr-r8shz is approved, waiting to be issued Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.932548 4932 csr.go:257] certificate signing request csr-r8shz is issued Feb 18 19:34:19 crc 
kubenswrapper[4932]: I0218 19:34:19.992233 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jmmxw"] Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.992540 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.992863 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bz9kj"] Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.993223 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.995134 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.995508 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.995677 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.995840 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.995949 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.996045 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 19:34:19 crc kubenswrapper[4932]: I0218 19:34:19.996137 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.008000 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.013545 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.016497 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.025111 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.035794 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.049382 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 
19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.062324 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.062806 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/45a22d6d-69dc-4c93-acd4-188dc6d1e315-serviceca\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.062868 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a22d6d-69dc-4c93-acd4-188dc6d1e315-host\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.062927 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb9jl\" (UniqueName: \"kubernetes.io/projected/4495ae98-57db-4409-87a7-56192683cc00-kube-api-access-wb9jl\") pod \"node-resolver-bz9kj\" (UID: \"4495ae98-57db-4409-87a7-56192683cc00\") " pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.062950 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnkr8\" (UniqueName: \"kubernetes.io/projected/45a22d6d-69dc-4c93-acd4-188dc6d1e315-kube-api-access-dnkr8\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.062980 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4495ae98-57db-4409-87a7-56192683cc00-hosts-file\") pod \"node-resolver-bz9kj\" (UID: \"4495ae98-57db-4409-87a7-56192683cc00\") " pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.075218 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.092297 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.098885 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.105077 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.120768 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.122992 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:17:06.133696671 +0000 UTC Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.134662 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.147530 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.161243 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163454 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/45a22d6d-69dc-4c93-acd4-188dc6d1e315-serviceca\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163480 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/45a22d6d-69dc-4c93-acd4-188dc6d1e315-host\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163521 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb9jl\" (UniqueName: \"kubernetes.io/projected/4495ae98-57db-4409-87a7-56192683cc00-kube-api-access-wb9jl\") pod \"node-resolver-bz9kj\" (UID: \"4495ae98-57db-4409-87a7-56192683cc00\") " pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163536 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnkr8\" (UniqueName: \"kubernetes.io/projected/45a22d6d-69dc-4c93-acd4-188dc6d1e315-kube-api-access-dnkr8\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163553 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4495ae98-57db-4409-87a7-56192683cc00-hosts-file\") pod \"node-resolver-bz9kj\" (UID: \"4495ae98-57db-4409-87a7-56192683cc00\") " pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163619 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4495ae98-57db-4409-87a7-56192683cc00-hosts-file\") pod \"node-resolver-bz9kj\" (UID: \"4495ae98-57db-4409-87a7-56192683cc00\") " pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.163918 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45a22d6d-69dc-4c93-acd4-188dc6d1e315-host\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " 
pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.166582 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/45a22d6d-69dc-4c93-acd4-188dc6d1e315-serviceca\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.176528 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.178194 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.178300 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.185200 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb9jl\" (UniqueName: \"kubernetes.io/projected/4495ae98-57db-4409-87a7-56192683cc00-kube-api-access-wb9jl\") pod \"node-resolver-bz9kj\" (UID: \"4495ae98-57db-4409-87a7-56192683cc00\") " pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.188965 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnkr8\" (UniqueName: \"kubernetes.io/projected/45a22d6d-69dc-4c93-acd4-188dc6d1e315-kube-api-access-dnkr8\") pod \"node-ca-jmmxw\" (UID: \"45a22d6d-69dc-4c93-acd4-188dc6d1e315\") " pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.203524 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.216455 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.232568 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.246366 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.267753 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.306921 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jmmxw" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.313866 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bz9kj" Feb 18 19:34:20 crc kubenswrapper[4932]: W0218 19:34:20.333601 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4495ae98_57db_4409_87a7_56192683cc00.slice/crio-fa6f6c9167bc7a3cadb3ff9688f45097f368b8478fbbe0a2db6365903a12ed00 WatchSource:0}: Error finding container fa6f6c9167bc7a3cadb3ff9688f45097f368b8478fbbe0a2db6365903a12ed00: Status 404 returned error can't find the container with id fa6f6c9167bc7a3cadb3ff9688f45097f368b8478fbbe0a2db6365903a12ed00 Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.408004 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bz9kj" event={"ID":"4495ae98-57db-4409-87a7-56192683cc00","Type":"ContainerStarted","Data":"fa6f6c9167bc7a3cadb3ff9688f45097f368b8478fbbe0a2db6365903a12ed00"} Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.417297 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jmmxw" event={"ID":"45a22d6d-69dc-4c93-acd4-188dc6d1e315","Type":"ContainerStarted","Data":"5c934bcaca0245db9ce20e13c22c18dad4eafacf7520b47c08ddae956032404d"} Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.769476 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.769528 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.769559 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.769595 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769681 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769681 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769707 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769719 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769744 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:24.769727359 +0000 UTC m=+28.351682204 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769761 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:24.76975394 +0000 UTC m=+28.351708785 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769810 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769921 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:24.769897543 +0000 UTC m=+28.351852388 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769823 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769964 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.769979 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.770016 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:24.770009575 +0000 UTC m=+28.351964420 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.819394 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jf9v4"] Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.819805 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-sj8bg"] Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.820000 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.820067 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.824004 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hbqb5"] Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.824983 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.825011 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.825075 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.825083 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.825089 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.825481 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-z7nqj"] Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.825586 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.826116 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.826768 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.826776 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.826891 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.827419 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.827493 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.827684 4932 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.828534 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.828556 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.828596 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.828597 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.828653 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.828757 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.829668 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.829796 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.845429 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870134 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870276 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9r7v\" (UniqueName: \"kubernetes.io/projected/c2740774-23d5-4857-9ac6-f0a01e38a64c-kube-api-access-g9r7v\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870310 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-cni-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870362 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-systemd-units\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870384 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-node-log\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870404 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c2740774-23d5-4857-9ac6-f0a01e38a64c-rootfs\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2740774-23d5-4857-9ac6-f0a01e38a64c-proxy-tls\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870442 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-socket-dir-parent\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870465 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-conf-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870499 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-kubelet\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870530 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7bv7\" (UniqueName: \"kubernetes.io/projected/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-kube-api-access-j7bv7\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870585 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-config\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870606 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870627 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-ovn\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870647 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-env-overrides\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870669 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870691 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-os-release\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc 
kubenswrapper[4932]: I0218 19:34:20.870710 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-cnibin\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870738 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21e3c087-c564-4f66-a656-c92a4e47fa72-ovn-node-metrics-cert\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870781 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-cni-multus\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870804 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-systemd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870824 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-etc-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc 
kubenswrapper[4932]: I0218 19:34:20.870846 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-netd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870868 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp7ht\" (UniqueName: \"kubernetes.io/projected/1b8d80e2-307e-43b6-9003-e77eef51e084-kube-api-access-lp7ht\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870889 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-slash\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870923 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-netns\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870948 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-var-lib-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 
19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-log-socket\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.870993 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-netns\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871041 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871062 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-k8s-cni-cncf-io\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871100 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-kubelet\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" 
Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871120 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-hostroot\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871141 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871170 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-script-lib\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871229 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cnibin\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871249 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-ovn-kubernetes\") pod \"ovnkube-node-hbqb5\" (UID: 
\"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871278 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-bin\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871298 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-system-cni-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871342 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b8d80e2-307e-43b6-9003-e77eef51e084-cni-binary-copy\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871368 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-cni-bin\") pod 
\"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871397 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-etc-kubernetes\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871419 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2740774-23d5-4857-9ac6-f0a01e38a64c-mcd-auth-proxy-config\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871441 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-daemon-config\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871476 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-system-cni-dir\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871496 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-os-release\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871517 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-multus-certs\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.871537 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnfjd\" (UniqueName: \"kubernetes.io/projected/21e3c087-c564-4f66-a656-c92a4e47fa72-kube-api-access-xnfjd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: E0218 19:34:20.871653 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:34:24.871634335 +0000 UTC m=+28.453589190 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.875494 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.897817 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.918543 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.932094 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.934091 4932 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 19:29:19 +0000 UTC, rotation deadline is 2026-11-09 16:10:47.839264864 +0000 UTC Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.934116 4932 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6332h36m26.905152099s for next certificate rotation Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.947385 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.957684 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972087 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-systemd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972125 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-etc-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972143 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-netd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972159 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp7ht\" (UniqueName: \"kubernetes.io/projected/1b8d80e2-307e-43b6-9003-e77eef51e084-kube-api-access-lp7ht\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972193 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-slash\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972211 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-netns\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972227 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-var-lib-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972241 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-log-socket\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972241 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-systemd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972273 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-etc-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972307 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-netd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972318 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-netns\") pod \"ovnkube-node-hbqb5\" (UID: 
\"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972288 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-netns\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972354 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-log-socket\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972257 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-netns\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972357 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-var-lib-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972393 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-slash\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972426 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972446 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-k8s-cni-cncf-io\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972461 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-kubelet\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972477 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-hostroot\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972495 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972512 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-script-lib\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972529 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cnibin\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972544 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-ovn-kubernetes\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972559 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-bin\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972575 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972592 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-system-cni-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972609 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b8d80e2-307e-43b6-9003-e77eef51e084-cni-binary-copy\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972617 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972625 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-cni-bin\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972643 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-etc-kubernetes\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972663 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/c2740774-23d5-4857-9ac6-f0a01e38a64c-mcd-auth-proxy-config\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972686 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-daemon-config\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972704 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-system-cni-dir\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972719 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-os-release\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972733 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-multus-certs\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972748 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnfjd\" (UniqueName: 
\"kubernetes.io/projected/21e3c087-c564-4f66-a656-c92a4e47fa72-kube-api-access-xnfjd\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972762 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9r7v\" (UniqueName: \"kubernetes.io/projected/c2740774-23d5-4857-9ac6-f0a01e38a64c-kube-api-access-g9r7v\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972777 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-cni-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972802 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-systemd-units\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972816 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-node-log\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972830 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c2740774-23d5-4857-9ac6-f0a01e38a64c-rootfs\") 
pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972845 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2740774-23d5-4857-9ac6-f0a01e38a64c-proxy-tls\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972861 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-socket-dir-parent\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972870 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-kubelet\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972876 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-conf-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972902 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-conf-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " 
pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972910 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-kubelet\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972935 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7bv7\" (UniqueName: \"kubernetes.io/projected/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-kube-api-access-j7bv7\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972952 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-config\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972967 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972983 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-ovn\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972996 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-env-overrides\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973012 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973072 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-os-release\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973088 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-cnibin\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973103 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21e3c087-c564-4f66-a656-c92a4e47fa72-ovn-node-metrics-cert\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc 
kubenswrapper[4932]: I0218 19:34:20.973118 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-cni-multus\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973163 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-cni-multus\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973203 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-k8s-cni-cncf-io\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973222 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-kubelet\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973348 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-hostroot\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973471 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-openvswitch\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973437 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973543 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-system-cni-dir\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973583 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cnibin\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973617 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-script-lib\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973623 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-ovn-kubernetes\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973661 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-bin\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973763 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-os-release\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973859 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-os-release\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.972822 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.973974 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-env-overrides\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974212 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-socket-dir-parent\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974371 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974398 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-cni-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974401 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-run-multus-certs\") pod \"multus-sj8bg\" (UID: 
\"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974421 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-ovn\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974443 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-node-log\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974448 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-host-var-lib-cni-bin\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974480 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-system-cni-dir\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974467 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-systemd-units\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974521 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c2740774-23d5-4857-9ac6-f0a01e38a64c-rootfs\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974532 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-config\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974769 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974774 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1b8d80e2-307e-43b6-9003-e77eef51e084-multus-daemon-config\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974795 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-etc-kubernetes\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.974827 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/1b8d80e2-307e-43b6-9003-e77eef51e084-cnibin\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.975004 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b8d80e2-307e-43b6-9003-e77eef51e084-cni-binary-copy\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.975350 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2740774-23d5-4857-9ac6-f0a01e38a64c-mcd-auth-proxy-config\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.979711 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21e3c087-c564-4f66-a656-c92a4e47fa72-ovn-node-metrics-cert\") pod \"ovnkube-node-hbqb5\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.981665 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2740774-23d5-4857-9ac6-f0a01e38a64c-proxy-tls\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.988641 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:20Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.994014 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9r7v\" (UniqueName: \"kubernetes.io/projected/c2740774-23d5-4857-9ac6-f0a01e38a64c-kube-api-access-g9r7v\") pod \"machine-config-daemon-jf9v4\" (UID: \"c2740774-23d5-4857-9ac6-f0a01e38a64c\") " pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.994713 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp7ht\" (UniqueName: \"kubernetes.io/projected/1b8d80e2-307e-43b6-9003-e77eef51e084-kube-api-access-lp7ht\") pod \"multus-sj8bg\" (UID: \"1b8d80e2-307e-43b6-9003-e77eef51e084\") " pod="openshift-multus/multus-sj8bg" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.997053 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnfjd\" (UniqueName: \"kubernetes.io/projected/21e3c087-c564-4f66-a656-c92a4e47fa72-kube-api-access-xnfjd\") pod \"ovnkube-node-hbqb5\" (UID: 
\"21e3c087-c564-4f66-a656-c92a4e47fa72\") " pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:20 crc kubenswrapper[4932]: I0218 19:34:20.998120 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7bv7\" (UniqueName: \"kubernetes.io/projected/e77eb8d5-cd29-49ef-9080-4cb12d3afa09-kube-api-access-j7bv7\") pod \"multus-additional-cni-plugins-z7nqj\" (UID: \"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\") " pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.003144 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.015146 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.028081 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.042194 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.057697 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.072909 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.089015 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.111394 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.124254 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:27:54.903012395 +0000 UTC Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.128206 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.137234 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.141557 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.144912 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sj8bg" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.152552 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.158884 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.159262 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" Feb 18 19:34:21 crc kubenswrapper[4932]: W0218 19:34:21.162663 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b8d80e2_307e_43b6_9003_e77eef51e084.slice/crio-ad82d7c7e3d13edc7c6c58889152a96019b0d2f44dc6850ffc0f02270eae2a47 WatchSource:0}: Error finding container ad82d7c7e3d13edc7c6c58889152a96019b0d2f44dc6850ffc0f02270eae2a47: Status 404 returned error can't find the container with id ad82d7c7e3d13edc7c6c58889152a96019b0d2f44dc6850ffc0f02270eae2a47 Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.170016 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: W0218 19:34:21.170609 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21e3c087_c564_4f66_a656_c92a4e47fa72.slice/crio-1becd3aaad487cc81da8ef3a1202626206425a186289801483cd534c986b4c0d WatchSource:0}: Error finding container 1becd3aaad487cc81da8ef3a1202626206425a186289801483cd534c986b4c0d: Status 404 returned error can't find the container with id 1becd3aaad487cc81da8ef3a1202626206425a186289801483cd534c986b4c0d Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 
19:34:21.178493 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.178559 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:21 crc kubenswrapper[4932]: E0218 19:34:21.178623 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:21 crc kubenswrapper[4932]: E0218 19:34:21.178707 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.181524 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.195320 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.205009 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.218631 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.230705 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.421258 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" 
event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerStarted","Data":"e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.421648 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerStarted","Data":"ad82d7c7e3d13edc7c6c58889152a96019b0d2f44dc6850ffc0f02270eae2a47"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.422666 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.422704 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"6673604ef23c936990fd3a8cd5650ce53797b3756c5d09c8a2d50e5da9e76dc9"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.423613 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159" exitCode=0 Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.423661 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.423678 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" 
event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"1becd3aaad487cc81da8ef3a1202626206425a186289801483cd534c986b4c0d"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.430941 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.433624 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jmmxw" event={"ID":"45a22d6d-69dc-4c93-acd4-188dc6d1e315","Type":"ContainerStarted","Data":"73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.437652 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerStarted","Data":"44d780ab0506509c5aaeb1e360d306fcb01135fed3f85c63db86c122cb10c676"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.438435 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.438936 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bz9kj" event={"ID":"4495ae98-57db-4409-87a7-56192683cc00","Type":"ContainerStarted","Data":"9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec"} Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.455750 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.473324 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.497607 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.509536 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.525750 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.538844 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.552957 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.566689 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.585755 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.601717 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.615644 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.629430 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.642309 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.659403 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.673726 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.690574 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.700460 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.712684 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.724902 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.737847 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.751941 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.765350 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.791785 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.805321 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.819816 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.835167 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:21 crc kubenswrapper[4932]: I0218 19:34:21.848104 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.125400 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:52:03.014328623 +0000 UTC Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.178267 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.178420 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.446383 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.446768 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.446787 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.446800 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.446810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.446821 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.447946 4932 generic.go:334] "Generic (PLEG): container finished" podID="e77eb8d5-cd29-49ef-9080-4cb12d3afa09" containerID="eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06" exitCode=0 Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.448038 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerDied","Data":"eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.449733 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.466823 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.485488 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.502080 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.514748 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.534146 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.552096 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.567516 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.568445 4932 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.572785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.572849 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: 
I0218 19:34:22.572863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.572992 4932 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.579887 4932 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.580353 4932 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.580328 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.582049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.582099 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.582116 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.582141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.582159 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.594846 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.611868 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.616187 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.616648 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.616684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.616696 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.616714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.617033 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.632036 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.632744 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.636819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.636860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.636871 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.636887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.636900 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.650496 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.652134 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.664629 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.664694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.664712 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.664740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.664758 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.671221 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.683374 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.687487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.687525 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.687536 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.687555 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.687567 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.688347 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.699354 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: E0218 19:34:22.699522 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.701727 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.701802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.701816 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.701834 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.701846 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.702975 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z 
is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.718453 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.733735 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.749836 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.773040 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.790201 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.804617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.804662 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.804673 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.804690 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.804701 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.810515 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.828142 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.843148 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.857908 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.872064 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.890483 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.907502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.907974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.908025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.908045 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.908055 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:22Z","lastTransitionTime":"2026-02-18T19:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.913859 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:22 crc kubenswrapper[4932]: I0218 19:34:22.930931 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:22Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.011502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.011596 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.011616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.011646 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.011665 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.117070 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.117167 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.117235 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.117271 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.117298 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.126305 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:17:20.872520739 +0000 UTC Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.178951 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.179028 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:23 crc kubenswrapper[4932]: E0218 19:34:23.179214 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:23 crc kubenswrapper[4932]: E0218 19:34:23.179355 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.220209 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.220264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.220277 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.220296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.220312 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.323708 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.323776 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.323795 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.323821 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.323841 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.426638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.426697 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.426716 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.426742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.426764 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.458539 4932 generic.go:334] "Generic (PLEG): container finished" podID="e77eb8d5-cd29-49ef-9080-4cb12d3afa09" containerID="017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78" exitCode=0 Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.458631 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerDied","Data":"017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.485034 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.513134 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.531886 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.531970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.531994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc 
kubenswrapper[4932]: I0218 19:34:23.532032 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.532058 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.536529 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.564101 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.593787 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.608908 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.631014 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.635077 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.635118 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.635126 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.635142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.635152 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.649228 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.659384 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.673824 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.684400 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.697918 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.710017 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.722421 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.737930 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.737970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.737983 4932 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.738002 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.738017 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.840972 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.841044 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.841071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.841105 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.841127 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.943916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.943975 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.943992 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.944016 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:23 crc kubenswrapper[4932]: I0218 19:34:23.944034 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:23Z","lastTransitionTime":"2026-02-18T19:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.047775 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.047837 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.047854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.047882 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.047899 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.127342 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:08:32.346828049 +0000 UTC Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.150978 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.151024 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.151034 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.151053 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.151062 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.178747 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.178934 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.254150 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.254265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.254290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.254321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.254347 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.357655 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.358215 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.358241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.358266 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.358283 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.461240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.461295 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.461314 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.461340 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.461359 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.465866 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.468472 4932 generic.go:334] "Generic (PLEG): container finished" podID="e77eb8d5-cd29-49ef-9080-4cb12d3afa09" containerID="600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699" exitCode=0 Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.468562 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerDied","Data":"600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.491431 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.504845 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.516903 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.527783 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.540866 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 
19:34:24.558694 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.563487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.563510 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.563518 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.563532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc 
kubenswrapper[4932]: I0218 19:34:24.563541 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.573359 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.590770 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.606412 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.629133 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.647859 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.662323 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.665854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.665890 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.665903 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.665923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.665938 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.679201 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.694504 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.768695 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.768781 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.768793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.768809 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.768819 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.822973 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.823025 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.823077 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.823105 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823247 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823337 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823360 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823374 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823435 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:32.823372024 +0000 UTC m=+36.405326909 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823460 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823285 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823509 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823519 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823491 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:32.823467066 +0000 UTC m=+36.405422011 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823566 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:32.823550218 +0000 UTC m=+36.405505073 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.823581 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:32.823573178 +0000 UTC m=+36.405528033 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.871815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.871854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.871867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.871893 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.871917 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.924376 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:24 crc kubenswrapper[4932]: E0218 19:34:24.924612 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:34:32.924591564 +0000 UTC m=+36.506546419 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.975382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.975411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.975421 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:24 crc kubenswrapper[4932]: I0218 19:34:24.975436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:24 crc kubenswrapper[4932]: 
I0218 19:34:24.975447 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:24Z","lastTransitionTime":"2026-02-18T19:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.078944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.079673 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.079694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.079719 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.079735 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.127720 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:06:06.069813304 +0000 UTC Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.182407 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:25 crc kubenswrapper[4932]: E0218 19:34:25.182572 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.183820 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:25 crc kubenswrapper[4932]: E0218 19:34:25.183944 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.189500 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.189549 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.189566 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.189586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.189604 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.292203 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.292265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.292283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.292309 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.292330 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.395431 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.395584 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.395604 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.395632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.395649 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.477559 4932 generic.go:334] "Generic (PLEG): container finished" podID="e77eb8d5-cd29-49ef-9080-4cb12d3afa09" containerID="c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218" exitCode=0 Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.477642 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerDied","Data":"c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.492999 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.497933 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.498003 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.498021 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.498048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.498066 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.513258 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.528484 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.546801 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.567037 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.584704 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.601198 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.601233 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.601247 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.601266 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.601277 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.608810 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.629275 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.645589 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.668007 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.686477 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.703341 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 
19:34:25.703370 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.703382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.703401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.703413 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.706528 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.723453 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.736429 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:25Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.806145 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.806262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.806289 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.806321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.806355 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.909462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.909541 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.909565 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.909595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:25 crc kubenswrapper[4932]: I0218 19:34:25.909616 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:25Z","lastTransitionTime":"2026-02-18T19:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.013099 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.013212 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.013238 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.013267 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.013287 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.116403 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.116471 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.116488 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.116515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.116534 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.128331 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:55:16.488407937 +0000 UTC Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.184908 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:26 crc kubenswrapper[4932]: E0218 19:34:26.185208 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.219748 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.219829 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.219854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.219885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.219907 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.324510 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.324575 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.324593 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.324617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.324634 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.427855 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.427915 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.427932 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.427954 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.427973 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.486561 4932 generic.go:334] "Generic (PLEG): container finished" podID="e77eb8d5-cd29-49ef-9080-4cb12d3afa09" containerID="fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a" exitCode=0 Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.486784 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerDied","Data":"fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.508698 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.531202 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.531364 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.531375 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 
19:34:26.531394 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.531407 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.547526 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.569265 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.591392 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.606434 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.622143 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.634686 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.634740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.634760 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.634790 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.634811 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.639558 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.657213 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.671957 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.686508 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.708803 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.731441 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.740070 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.740323 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.740454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.740578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.740700 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.759359 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.799121 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:26Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.843255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc 
kubenswrapper[4932]: I0218 19:34:26.843290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.843300 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.843314 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.843323 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.938471 4932 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.957650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.957709 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.957728 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.957754 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:26 crc kubenswrapper[4932]: I0218 19:34:26.957778 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:26Z","lastTransitionTime":"2026-02-18T19:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.061142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.061255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.061269 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.061296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.061340 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.128948 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:41:27.835478124 +0000 UTC Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.165821 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.165914 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.165966 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.165993 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.166011 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.178424 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.178467 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:27 crc kubenswrapper[4932]: E0218 19:34:27.178695 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:27 crc kubenswrapper[4932]: E0218 19:34:27.180264 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.196343 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.234942 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.254612 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.269778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.269843 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.269861 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.269886 4932 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.269905 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.276728 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.303370 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.324234 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.339699 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.360775 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.374877 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.374957 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.374982 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.375010 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.375038 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.380358 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.394502 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.411038 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.429023 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 
19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.444283 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.465471 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.477460 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc 
kubenswrapper[4932]: I0218 19:34:27.477504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.477513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.477531 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.477543 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.494727 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.495343 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.495386 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.501042 4932 generic.go:334] "Generic (PLEG): container finished" podID="e77eb8d5-cd29-49ef-9080-4cb12d3afa09" containerID="7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf" exitCode=0 Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.501111 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerDied","Data":"7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.511641 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.566777 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.574302 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.576159 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.579646 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.579701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.579720 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.579742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.579762 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.592064 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.606900 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.623862 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.637612 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.651231 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.668112 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.682295 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.682339 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.682350 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc 
kubenswrapper[4932]: I0218 19:34:27.682365 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.682375 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.686011 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba04
01c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.704356 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.717644 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.732856 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.746431 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.769742 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194c
da3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.785124 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.785209 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.785233 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.785263 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.785287 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.785591 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.803721 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.817852 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.832968 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.846824 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.859592 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.875336 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.889741 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.889791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.889805 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.889826 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.889841 4932 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.890511 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.907951 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.921669 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.937651 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.952733 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.970926 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.993580 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.993628 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.993642 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.993663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.993678 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:27Z","lastTransitionTime":"2026-02-18T19:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:27 crc kubenswrapper[4932]: I0218 19:34:27.997575 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.096762 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.096838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.096880 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.096915 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.096942 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.129796 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:08:15.385505321 +0000 UTC Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.178883 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:28 crc kubenswrapper[4932]: E0218 19:34:28.179185 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.200080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.200160 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.200180 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.200237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.200254 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.303499 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.303572 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.303590 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.303616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.303636 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.407487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.407548 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.407565 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.407588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.407604 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.509207 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.509249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.509263 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.509281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.509293 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.510400 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.510435 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" event={"ID":"e77eb8d5-cd29-49ef-9080-4cb12d3afa09","Type":"ContainerStarted","Data":"7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.533101 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.550390 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.566528 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.580686 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.583442 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.601702 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.611996 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.612048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.612063 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.612087 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.612104 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.619653 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.635399 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.652180 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.670404 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.692362 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.712072 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.715115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.715198 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.715213 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.715231 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.715302 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.731050 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.754559 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.769131 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.782500 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.803294 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.817168 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.818661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.818745 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.818765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.818797 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.818825 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.840503 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.871363 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.891506 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.909846 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.922001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.922043 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.922061 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.922088 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.922106 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:28Z","lastTransitionTime":"2026-02-18T19:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.930162 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.951752 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.967463 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:28 crc kubenswrapper[4932]: I0218 19:34:28.986872 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.010060 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.024536 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.024573 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.024582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.024598 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.024606 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.028642 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.045432 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.126920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.126962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.126971 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.126985 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.126995 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.130079 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:04:14.026352848 +0000 UTC Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.178548 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:29 crc kubenswrapper[4932]: E0218 19:34:29.178664 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.178515 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:29 crc kubenswrapper[4932]: E0218 19:34:29.179323 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.229521 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.229581 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.229600 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.229627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.229652 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.333024 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.333073 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.333092 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.333114 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.333132 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.436238 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.436321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.436342 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.436372 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.436392 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.513856 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.539228 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.539320 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.539341 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.539373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.539393 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.642651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.642717 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.642735 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.642766 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.642790 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.747634 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.747679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.747690 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.747705 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.747717 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.851046 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.851119 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.851136 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.851164 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.851214 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.955331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.955383 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.955398 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.955419 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:29 crc kubenswrapper[4932]: I0218 19:34:29.955432 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:29Z","lastTransitionTime":"2026-02-18T19:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.058573 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.058631 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.058648 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.058672 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.058691 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.130985 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:46:23.602733 +0000 UTC Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.162889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.162952 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.162971 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.163000 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.163021 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.179148 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:30 crc kubenswrapper[4932]: E0218 19:34:30.179368 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.266884 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.267478 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.267934 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.268469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.268935 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.372433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.372490 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.372508 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.372532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.372551 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.475828 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.475886 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.475903 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.475933 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.475955 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.521318 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/0.log" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.525123 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009" exitCode=1 Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.525242 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.526669 4932 scope.go:117] "RemoveContainer" containerID="ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.552123 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.573461 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.579434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.579530 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.579553 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.579624 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.579646 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.590580 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.611626 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54d
d4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.635819 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7
daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:
26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.658722 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.680225 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.683138 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.683222 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.683240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc 
kubenswrapper[4932]: I0218 19:34:30.683273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.683291 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.698113 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.716780 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.750952 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:29Z\\\",\\\"message\\\":\\\"o/informers/factory.go:160\\\\nI0218 19:34:29.827366 6197 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:34:29.827859 6197 reflector.go:311] Stopping 
reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827926 6197 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827978 6197 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:34:29.828543 6197 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 19:34:29.828558 6197 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:29.828579 6197 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:29.828594 6197 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:29.828604 6197 factory.go:656] Stopping watch factory\\\\nI0218 19:34:29.828615 6197 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:34:29.828628 6197 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.773086 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.785993 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.786079 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.786100 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.786133 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.786154 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.793800 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.812997 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.824696 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.895665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.895728 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.895742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.895773 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:30 crc kubenswrapper[4932]: I0218 19:34:30.895787 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:30Z","lastTransitionTime":"2026-02-18T19:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.005528 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.005654 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.005684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.005724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.005758 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.108554 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.108617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.108627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.108662 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.108677 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.131662 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:09:59.479127916 +0000 UTC Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.179854 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.179903 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:31 crc kubenswrapper[4932]: E0218 19:34:31.180033 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:31 crc kubenswrapper[4932]: E0218 19:34:31.180144 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.211508 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.211542 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.211551 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.211564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.211572 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.313759 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.313797 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.313810 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.313826 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.313839 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.416991 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.417025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.417033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.417047 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.417058 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.519722 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.519755 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.519764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.519779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.519788 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.533426 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/0.log" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.537026 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.537161 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.561441 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.585442 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.609813 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.621938 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.621962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.621970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.621984 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.622002 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.630093 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.654265 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.673808 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.689978 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.704600 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.722802 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.724763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.724824 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.724843 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc 
kubenswrapper[4932]: I0218 19:34:31.724867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.724886 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.741409 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19
:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 
19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.763826 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.783619 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.803038 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.825299 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:29Z\\\",\\\"message\\\":\\\"o/informers/factory.go:160\\\\nI0218 19:34:29.827366 6197 reflector.go:311] Stopping 
reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:34:29.827859 6197 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827926 6197 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827978 6197 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:34:29.828543 6197 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 19:34:29.828558 6197 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:29.828579 6197 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:29.828594 6197 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:29.828604 6197 factory.go:656] Stopping watch factory\\\\nI0218 19:34:29.828615 6197 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:34:29.828628 6197 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.827680 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.827739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.827758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.827783 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.827804 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.931339 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.931424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.931442 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.931484 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:31 crc kubenswrapper[4932]: I0218 19:34:31.931504 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:31Z","lastTransitionTime":"2026-02-18T19:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.034921 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.035023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.035048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.035080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.035106 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.132379 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:15:15.051406651 +0000 UTC Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.137842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.137897 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.137925 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.137955 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.137974 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.178767 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.178967 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.241315 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.241384 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.241411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.241508 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.241536 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.344748 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.344835 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.344859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.344891 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.344917 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.448513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.448580 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.448599 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.448626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.448644 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.544282 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/1.log" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.545325 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/0.log" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.550861 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4" exitCode=1 Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.550915 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.550982 4932 scope.go:117] "RemoveContainer" containerID="ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.552109 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.552149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.552180 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.552243 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.552267 
4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.555670 4932 scope.go:117] "RemoveContainer" containerID="e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4" Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.556047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.578493 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.601330 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.622965 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.644349 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.655860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.655934 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.655957 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.656007 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.656025 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.680490 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:29Z\\\",\\\"message\\\":\\\"o/informers/factory.go:160\\\\nI0218 19:34:29.827366 6197 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:34:29.827859 6197 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827926 6197 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827978 6197 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:34:29.828543 6197 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 19:34:29.828558 6197 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:29.828579 6197 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:29.828594 6197 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:29.828604 6197 factory.go:656] Stopping watch factory\\\\nI0218 19:34:29.828615 6197 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:34:29.828628 6197 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 
handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\
\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.700907 4932 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.722048 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.743792 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.758576 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.758617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.758626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.758641 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.758653 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.760270 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.768365 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.768433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.768456 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.768487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.768509 
4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.789679 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.814064 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.827932 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.827967 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.827992 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.828009 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828132 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828134 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828133 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828149 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828256 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828234 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:48.828219368 +0000 UTC m=+52.410174213 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828285 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828356 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828371 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828315 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:48.82829787 +0000 UTC m=+52.410252735 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828465 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:48.828444803 +0000 UTC m=+52.410399708 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.828485 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:48.828477264 +0000 UTC m=+52.410432209 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.830424 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.834282 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.834324 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.834344 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.834361 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.834373 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.842071 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.858559 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54d
d4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.861255 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.864807 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.864849 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.864860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.864876 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.864889 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.871958 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.878522 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.881613 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.881643 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.881652 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.881664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.881674 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.891437 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.894643 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.894679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.894758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.894777 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.894787 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.928345 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.928501 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:34:48.928487468 +0000 UTC m=+52.510442313 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.936624 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:32Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:32 crc kubenswrapper[4932]: E0218 19:34:32.936761 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.938217 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.938249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.938259 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.938272 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:32 crc kubenswrapper[4932]: I0218 19:34:32.938282 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:32Z","lastTransitionTime":"2026-02-18T19:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.040782 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.040842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.040862 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.040891 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.040914 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.132732 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:50:20.475405254 +0000 UTC Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.144081 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.144143 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.144164 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.144227 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.144251 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.179297 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.179371 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:33 crc kubenswrapper[4932]: E0218 19:34:33.179505 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:33 crc kubenswrapper[4932]: E0218 19:34:33.179986 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.246801 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.246848 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.246858 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.246876 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.246891 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.349676 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.349738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.349756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.349779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.349797 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.453352 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.453417 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.453437 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.453463 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.453481 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.555885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.555982 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.556001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.556023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.556042 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.560936 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/1.log" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.660098 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.660218 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.660246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.660276 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.660299 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.763059 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.763480 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.763652 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.763821 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.763953 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.867622 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.868013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.868234 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.868376 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.868503 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.938346 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj"] Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.939082 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:33 crc kubenswrapper[4932]: W0218 19:34:33.942301 4932 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: secrets "ovn-control-plane-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Feb 18 19:34:33 crc kubenswrapper[4932]: W0218 19:34:33.945215 4932 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: secrets "ovn-kubernetes-control-plane-dockercfg-gs7dd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Feb 18 19:34:33 crc kubenswrapper[4932]: E0218 19:34:33.948558 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-control-plane-dockercfg-gs7dd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:34:33 crc kubenswrapper[4932]: E0218 19:34:33.945322 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-control-plane-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:34:33 crc 
kubenswrapper[4932]: I0218 19:34:33.972010 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.972086 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.972111 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.972143 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.972168 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:33Z","lastTransitionTime":"2026-02-18T19:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:33 crc kubenswrapper[4932]: I0218 19:34:33.979910 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:29Z\\\",\\\"message\\\":\\\"o/informers/factory.go:160\\\\nI0218 19:34:29.827366 6197 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:34:29.827859 6197 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827926 6197 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827978 6197 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:34:29.828543 6197 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 19:34:29.828558 6197 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:29.828579 6197 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:29.828594 6197 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:29.828604 6197 factory.go:656] Stopping watch factory\\\\nI0218 19:34:29.828615 6197 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:34:29.828628 6197 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 
handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\
\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.000989 4932 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:33Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.023087 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.041327 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.044222 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8llv\" (UniqueName: \"kubernetes.io/projected/64edee2c-efed-415d-8d8e-362edad7c5bb-kube-api-access-b8llv\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.044348 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64edee2c-efed-415d-8d8e-362edad7c5bb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.044434 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64edee2c-efed-415d-8d8e-362edad7c5bb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.044482 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64edee2c-efed-415d-8d8e-362edad7c5bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.065090 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.075297 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.075360 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.075381 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.075406 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.075427 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.087086 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.106957 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.130042 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.133291 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:02:54.99425301 +0000 UTC Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.145517 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8llv\" (UniqueName: \"kubernetes.io/projected/64edee2c-efed-415d-8d8e-362edad7c5bb-kube-api-access-b8llv\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.145601 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64edee2c-efed-415d-8d8e-362edad7c5bb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 
crc kubenswrapper[4932]: I0218 19:34:34.145705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64edee2c-efed-415d-8d8e-362edad7c5bb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.145755 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64edee2c-efed-415d-8d8e-362edad7c5bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.147072 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64edee2c-efed-415d-8d8e-362edad7c5bb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.147165 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64edee2c-efed-415d-8d8e-362edad7c5bb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.154240 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.178266 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:34 crc kubenswrapper[4932]: E0218 19:34:34.178486 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.179148 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.179210 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.179231 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.179258 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.179282 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.180653 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.189453 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8llv\" (UniqueName: \"kubernetes.io/projected/64edee2c-efed-415d-8d8e-362edad7c5bb-kube-api-access-b8llv\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.205891 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.228275 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.249378 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.270643 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.283050 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.283118 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.283140 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc 
kubenswrapper[4932]: I0218 19:34:34.283171 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.283231 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.293270 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.386680 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.386784 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.386805 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.386831 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.386849 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.491151 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.491246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.491264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.491292 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.491310 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.594321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.594390 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.594410 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.594440 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.594459 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.697504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.697582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.697602 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.697632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.697650 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.801303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.801362 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.801384 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.801415 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.801439 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.905513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.905610 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.905632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.905665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:34 crc kubenswrapper[4932]: I0218 19:34:34.905685 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:34Z","lastTransitionTime":"2026-02-18T19:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.009883 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.009961 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.009979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.010009 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.010033 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.116552 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kdjbt"] Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.117353 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.117463 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.117484 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.117514 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.117534 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.117635 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.117757 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.133744 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:59:20.118302823 +0000 UTC Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.141468 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.146549 4932 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: failed to sync secret cache: timed out waiting for the condition Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.146649 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64edee2c-efed-415d-8d8e-362edad7c5bb-ovn-control-plane-metrics-cert podName:64edee2c-efed-415d-8d8e-362edad7c5bb nodeName:}" failed. No retries permitted until 2026-02-18 19:34:35.646621288 +0000 UTC m=+39.228576163 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/64edee2c-efed-415d-8d8e-362edad7c5bb-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-749d76644c-bzfpj" (UID: "64edee2c-efed-415d-8d8e-362edad7c5bb") : failed to sync secret cache: timed out waiting for the condition Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.164268 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-iden
tity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.179210 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.179299 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.179372 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.179493 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.179976 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.201151 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.217810 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.220436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.220467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.220483 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.220502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.220518 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.235142 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.258189 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod 
\"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.258276 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r9kj\" (UniqueName: \"kubernetes.io/projected/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-kube-api-access-2r9kj\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.260465 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID
\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID
\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.280570 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.305099 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.326371 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.326439 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.326462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.326493 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.326517 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.327402 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.352724 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.359549 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r9kj\" (UniqueName: 
\"kubernetes.io/projected/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-kube-api-access-2r9kj\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.359739 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.359932 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.360020 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:34:35.859998639 +0000 UTC m=+39.441953514 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.376692 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.377054 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.387760 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.391059 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r9kj\" (UniqueName: \"kubernetes.io/projected/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-kube-api-access-2r9kj\") pod 
\"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.396472 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.422601 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.431267 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.431346 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.431366 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 
19:34:35.431401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.431431 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.453109 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef1025273d1a52fcc05cc010942139a00bd9ac7b3adbd346e088c7feb903d009\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:29Z\\\",\\\"message\\\":\\\"o/informers/factory.go:160\\\\nI0218 19:34:29.827366 6197 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:34:29.827859 6197 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827926 6197 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:34:29.827978 6197 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:34:29.828543 6197 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 19:34:29.828558 6197 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:29.828579 6197 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:29.828594 6197 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:29.828604 6197 factory.go:656] Stopping watch factory\\\\nI0218 19:34:29.828615 6197 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 19:34:29.828628 6197 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 
handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\
\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.473724 4932 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc 
kubenswrapper[4932]: I0218 19:34:35.539373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.539429 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.539443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.539473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.539496 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.643504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.643556 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.643573 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.643595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.643613 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.663038 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64edee2c-efed-415d-8d8e-362edad7c5bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.668858 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64edee2c-efed-415d-8d8e-362edad7c5bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzfpj\" (UID: \"64edee2c-efed-415d-8d8e-362edad7c5bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.746733 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.747312 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.747334 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.747359 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.747375 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.765350 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" Feb 18 19:34:35 crc kubenswrapper[4932]: W0218 19:34:35.787129 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64edee2c_efed_415d_8d8e_362edad7c5bb.slice/crio-fdd19e859e2aa8872019dc996316c4a62a9a2b4d299e6cd9167c5c3b88b2ae9f WatchSource:0}: Error finding container fdd19e859e2aa8872019dc996316c4a62a9a2b4d299e6cd9167c5c3b88b2ae9f: Status 404 returned error can't find the container with id fdd19e859e2aa8872019dc996316c4a62a9a2b4d299e6cd9167c5c3b88b2ae9f Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.853971 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.854018 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.854030 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.854048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.854058 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.865653 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.865906 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.866009 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:34:36.865984818 +0000 UTC m=+40.447939703 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.901052 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.901897 4932 scope.go:117] "RemoveContainer" containerID="e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4" Feb 18 19:34:35 crc kubenswrapper[4932]: E0218 19:34:35.902070 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.918030 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.931505 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.948130 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.958379 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.958412 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.958419 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.958448 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.958457 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:35Z","lastTransitionTime":"2026-02-18T19:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.962067 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.974887 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.986550 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:35 crc kubenswrapper[4932]: I0218 19:34:35.997041 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.013056 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.025768 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.039260 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.054987 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.061237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.061266 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.061276 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.061290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.061300 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.067869 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.086238 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcon
t/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.108265 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.125753 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 
19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.134859 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:32:48.691953197 +0000 UTC Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.138345 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc 
kubenswrapper[4932]: I0218 19:34:36.164569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.164629 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.164645 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.164669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.164685 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.179228 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:36 crc kubenswrapper[4932]: E0218 19:34:36.179403 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.266715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.266756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.266765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.266780 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.266790 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.370808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.370888 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.370912 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.370943 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.370963 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.473621 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.473682 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.473700 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.473724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.473744 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.577231 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.577292 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.577311 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.577517 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.577539 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.582123 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" event={"ID":"64edee2c-efed-415d-8d8e-362edad7c5bb","Type":"ContainerStarted","Data":"76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.582232 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" event={"ID":"64edee2c-efed-415d-8d8e-362edad7c5bb","Type":"ContainerStarted","Data":"594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.582256 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" event={"ID":"64edee2c-efed-415d-8d8e-362edad7c5bb","Type":"ContainerStarted","Data":"fdd19e859e2aa8872019dc996316c4a62a9a2b4d299e6cd9167c5c3b88b2ae9f"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.604955 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.631486 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.650599 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.673632 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.681753 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.681825 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.681842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.681870 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.681889 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.698490 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.716762 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.741364 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.761102 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.781744 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.785071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc 
kubenswrapper[4932]: I0218 19:34:36.785210 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.785252 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.785287 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.785306 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.816888 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 
19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.836967 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.859120 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: E0218 19:34:36.878525 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.878870 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:36 crc kubenswrapper[4932]: E0218 19:34:36.879085 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:34:38.879046572 +0000 UTC m=+42.461001447 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.885710 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.890442 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.890506 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.890525 
4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.890551 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.890571 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.905828 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f
38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.929262 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.957242 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.994006 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.994075 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.994093 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.994120 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:36 crc kubenswrapper[4932]: I0218 19:34:36.994140 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:36Z","lastTransitionTime":"2026-02-18T19:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.096748 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.096809 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.096828 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.096854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.096872 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.135543 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:30:52.344795943 +0000 UTC Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.178387 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:37 crc kubenswrapper[4932]: E0218 19:34:37.179495 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.179545 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.179620 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:37 crc kubenswrapper[4932]: E0218 19:34:37.179802 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:37 crc kubenswrapper[4932]: E0218 19:34:37.179990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.193981 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.199540 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.199575 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.199609 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.199625 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.199634 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.206484 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.221669 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54d
d4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.247245 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7
daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:
26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.271499 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d
4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.302079 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.302969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.303013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.303031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.303053 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.303072 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.323570 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.343269 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.370939 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:
16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e
0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.388031 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.405304 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.405356 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.405375 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 
19:34:37.405399 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.405418 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.416351 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 
19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.432928 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.448400 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.467526 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.483087 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.495293 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.508751 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.508811 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.508822 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.508842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.508854 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.611403 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.611552 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.611572 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.611641 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.611662 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.715284 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.715379 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.715400 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.715426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.715445 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.818590 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.818673 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.818697 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.818731 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.818755 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.922368 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.922416 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.922432 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.922449 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:37 crc kubenswrapper[4932]: I0218 19:34:37.922460 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:37Z","lastTransitionTime":"2026-02-18T19:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.025799 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.025846 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.025861 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.025880 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.025896 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.129403 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.129478 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.129496 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.129520 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.129538 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.136409 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:03:26.615837202 +0000 UTC Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.179109 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:38 crc kubenswrapper[4932]: E0218 19:34:38.179364 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.233674 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.233747 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.233771 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.233806 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.233834 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.338624 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.338714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.338732 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.338758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.338779 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.442968 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.443047 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.443067 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.443097 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.443123 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.546585 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.546635 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.546647 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.546666 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.546680 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.650852 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.650908 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.650925 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.650949 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.650969 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.754532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.754589 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.754605 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.754630 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.754678 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.858268 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.858331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.858353 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.858383 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.858407 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.899125 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:38 crc kubenswrapper[4932]: E0218 19:34:38.899485 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:38 crc kubenswrapper[4932]: E0218 19:34:38.899594 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:34:42.899571407 +0000 UTC m=+46.481526292 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.962679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.962762 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.962781 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.963247 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:38 crc kubenswrapper[4932]: I0218 19:34:38.963305 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:38Z","lastTransitionTime":"2026-02-18T19:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.067444 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.067527 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.067547 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.067577 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.067598 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.137018 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:20:10.60753206 +0000 UTC Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.171078 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.171140 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.171160 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.171235 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.171262 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.178891 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.178961 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.179016 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:39 crc kubenswrapper[4932]: E0218 19:34:39.179255 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:39 crc kubenswrapper[4932]: E0218 19:34:39.179407 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:39 crc kubenswrapper[4932]: E0218 19:34:39.179681 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.274072 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.274132 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.274151 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.274217 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.274239 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.377707 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.377833 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.377856 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.377886 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.377908 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.481762 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.481824 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.481842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.481866 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.481886 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.585157 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.585343 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.585376 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.585415 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.585444 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.688965 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.689033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.689052 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.689081 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.689100 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.792974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.793445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.793667 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.793963 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.794243 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.898080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.898137 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.898149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.898193 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:39 crc kubenswrapper[4932]: I0218 19:34:39.898207 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:39Z","lastTransitionTime":"2026-02-18T19:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.001487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.001947 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.002261 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.002564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.002810 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.106445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.106922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.107105 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.107325 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.107516 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.137589 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:28:40.753112565 +0000 UTC Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.179229 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:40 crc kubenswrapper[4932]: E0218 19:34:40.179493 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.211202 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.211284 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.211311 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.211338 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.211356 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.314657 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.314723 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.314741 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.314767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.314787 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.418058 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.418112 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.418130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.418155 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.418201 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.522701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.522767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.522785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.522813 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.522831 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.626042 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.626124 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.626145 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.626705 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.626768 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.729383 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.729428 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.729445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.729466 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.729483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.831739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.831796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.831814 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.831838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.831855 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.934937 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.935008 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.935060 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.935092 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:40 crc kubenswrapper[4932]: I0218 19:34:40.935116 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:40Z","lastTransitionTime":"2026-02-18T19:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.037820 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.037858 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.037869 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.037887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.037901 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.138951 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:28:28.274894939 +0000 UTC Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.140936 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.141060 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.141147 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.141305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.141431 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.178337 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.178542 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.178704 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:41 crc kubenswrapper[4932]: E0218 19:34:41.179402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:41 crc kubenswrapper[4932]: E0218 19:34:41.179484 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:41 crc kubenswrapper[4932]: E0218 19:34:41.179642 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.244722 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.244834 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.244882 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.244899 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.244911 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.347458 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.347524 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.347543 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.347568 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.347590 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.450545 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.450637 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.450651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.450681 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.450692 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.554001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.554087 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.554113 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.554131 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.554141 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.658093 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.658163 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.658212 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.658242 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.658261 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.761994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.762071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.762096 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.762130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.762154 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.865307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.865374 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.865396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.865428 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.865461 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.968431 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.968496 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.968515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.968539 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:41 crc kubenswrapper[4932]: I0218 19:34:41.968556 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:41Z","lastTransitionTime":"2026-02-18T19:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.071569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.071630 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.071646 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.071669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.071685 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.139035 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:17:42.525747333 +0000 UTC Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.174318 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.174364 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.174376 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.174393 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.174406 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.178749 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:42 crc kubenswrapper[4932]: E0218 19:34:42.178860 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.277796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.277865 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.277889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.277916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.277938 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.381236 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.381307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.381327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.381357 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.381380 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.484890 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.485245 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.485282 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.485318 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.485343 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.587781 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.587834 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.587849 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.587868 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.587880 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.691751 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.691815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.691833 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.691861 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.691878 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.794662 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.794730 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.794750 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.794777 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.794798 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.897252 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.897320 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.897344 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.897373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.897400 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.946133 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:42 crc kubenswrapper[4932]: E0218 19:34:42.946420 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:42 crc kubenswrapper[4932]: E0218 19:34:42.946583 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:34:50.946528775 +0000 UTC m=+54.528483660 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.977327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.977394 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.977410 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.977433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:42 crc kubenswrapper[4932]: I0218 19:34:42.977450 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:42Z","lastTransitionTime":"2026-02-18T19:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.001929 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:42Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.008236 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.008299 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.008315 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.008344 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.008363 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.029685 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.035755 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.035806 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.035823 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.035847 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.035868 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.057379 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.063904 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.063952 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.063970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.063995 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.064013 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.084280 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.089535 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.089614 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.089632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.089658 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.089677 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.111578 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.111904 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.114565 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.114612 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.114628 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.114651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.114670 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.140000 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:55:27.105537769 +0000 UTC Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.179106 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.179258 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.179299 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.179870 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.179960 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:43 crc kubenswrapper[4932]: E0218 19:34:43.179998 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.223691 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.223785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.223803 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.223827 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.223845 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.327167 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.327264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.327281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.327308 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.327327 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.430844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.430931 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.430956 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.430985 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.431012 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.534738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.534782 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.534796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.534814 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.534826 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.638164 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.638261 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.638281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.638305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.638322 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.741726 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.742155 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.742361 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.742513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.742660 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.845633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.845734 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.845755 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.845781 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.845801 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.949087 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.949512 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.949774 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.950024 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:43 crc kubenswrapper[4932]: I0218 19:34:43.950275 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:43Z","lastTransitionTime":"2026-02-18T19:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.053958 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.054428 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.054658 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.054862 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.055080 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.141203 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:39:58.484594016 +0000 UTC Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.157745 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.157972 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.158204 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.158435 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.158668 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.178218 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:44 crc kubenswrapper[4932]: E0218 19:34:44.178689 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.262454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.262916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.263113 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.263378 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.263607 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.367270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.367665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.367821 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.367962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.368091 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.470853 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.470927 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.470950 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.470977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.470996 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.574684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.574737 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.574756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.574781 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.574800 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.678127 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.678237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.678255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.678284 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.678304 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.780827 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.780889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.780907 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.780929 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.780947 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.884148 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.884248 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.884272 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.884307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.884329 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.986844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.986905 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.986922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.986949 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:44 crc kubenswrapper[4932]: I0218 19:34:44.986969 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:44Z","lastTransitionTime":"2026-02-18T19:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.090321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.090738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.090885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.091025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.091264 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.142290 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:50:35.278569312 +0000 UTC Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.178503 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.178577 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.178763 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:45 crc kubenswrapper[4932]: E0218 19:34:45.178943 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:45 crc kubenswrapper[4932]: E0218 19:34:45.179107 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:45 crc kubenswrapper[4932]: E0218 19:34:45.179451 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.194364 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.194414 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.194431 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.194454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.194472 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.298348 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.298413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.298434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.298462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.298483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.401430 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.401501 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.401520 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.401544 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.401565 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.504718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.504767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.504784 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.504806 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.504821 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.607398 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.607483 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.607509 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.608139 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.608241 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.745931 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.746031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.746050 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.746076 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.746096 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.849228 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.849286 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.849304 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.849328 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.849348 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.951774 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.951881 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.951899 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.951923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:45 crc kubenswrapper[4932]: I0218 19:34:45.951942 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:45Z","lastTransitionTime":"2026-02-18T19:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.054573 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.054653 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.054671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.054701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.054719 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.143830 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:38:03.793456446 +0000 UTC Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.157253 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.157320 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.157340 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.157366 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.157384 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.178825 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:46 crc kubenswrapper[4932]: E0218 19:34:46.179002 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.260691 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.260747 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.260764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.260788 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.260806 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.364210 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.364345 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.364381 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.364451 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.364474 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.467037 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.467137 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.467166 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.467255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.467276 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.573676 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.573819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.573902 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.573979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.574006 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.677416 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.677461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.677479 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.677504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.677524 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.779898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.779969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.779994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.780021 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.780043 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.883355 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.883412 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.883433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.883460 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.883483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.986757 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.986797 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.986809 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.986827 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:46 crc kubenswrapper[4932]: I0218 19:34:46.986840 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:46Z","lastTransitionTime":"2026-02-18T19:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.089808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.089872 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.089898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.089928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.089949 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.154441 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:06:36.297914774 +0000 UTC Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.178993 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.179122 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.179249 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:47 crc kubenswrapper[4932]: E0218 19:34:47.179371 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:47 crc kubenswrapper[4932]: E0218 19:34:47.179150 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:47 crc kubenswrapper[4932]: E0218 19:34:47.179702 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.191889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.191940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.191952 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.191973 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.191985 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.202416 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc 
kubenswrapper[4932]: I0218 19:34:47.218100 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.240280 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 
19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.251703 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.275902 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.292026 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.294302 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.294372 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.294396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.294426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.294447 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.310160 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.330374 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.346368 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.368726 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.385479 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.396844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.396944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.396970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.397004 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.397027 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.412058 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.429725 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54d
d4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.448686 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.466210 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.481828 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.500559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc 
kubenswrapper[4932]: I0218 19:34:47.500611 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.500623 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.500644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.500657 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.603941 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.603989 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.604004 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.604028 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.604045 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.706351 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.706407 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.706427 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.706451 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.706467 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.808702 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.808764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.808786 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.808809 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.808826 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.911817 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.911879 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.911893 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.911912 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.911927 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:47Z","lastTransitionTime":"2026-02-18T19:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.950444 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.962345 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.977513 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:47 crc kubenswrapper[4932]: I0218 19:34:47.994790 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d
4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.014033 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.015778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.016099 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.016550 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.016588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.016603 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.027883 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.040929 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.056787 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.078620 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.097004 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.117922 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.120806 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc 
kubenswrapper[4932]: I0218 19:34:48.121025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.121232 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.121415 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.121574 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.132576 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.150571 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.155156 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:12:31.388305587 +0000 UTC Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.178675 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.178901 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.180837 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 
19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.194849 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.212710 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.224384 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.224445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.224467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.224494 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.224511 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.226905 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.242554 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:48Z is after 
2025-08-24T17:21:41Z" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.327469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.327502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.327513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.327529 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.327541 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.434152 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.435017 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.435038 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.435067 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.435086 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.537746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.538411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.538621 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.538789 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.538944 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.641245 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.641276 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.641284 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.641300 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.641309 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.744480 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.744547 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.744569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.744594 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.744612 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.846810 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.846872 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.846890 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.846915 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.846933 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.914738 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.914815 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.914882 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.914904 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.914921 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915022 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:35:20.914992937 +0000 UTC m=+84.496947822 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915054 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915122 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915164 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915214 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915167 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:35:20.91513678 +0000 UTC m=+84.497091685 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915281 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915319 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:35:20.915293293 +0000 UTC m=+84.497248178 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915334 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915353 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:48 crc kubenswrapper[4932]: E0218 19:34:48.915437 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:35:20.915407556 +0000 UTC m=+84.497362411 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.950513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.950612 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.950639 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.950671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:48 crc kubenswrapper[4932]: I0218 19:34:48.950695 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:48Z","lastTransitionTime":"2026-02-18T19:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.015995 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:34:49 crc kubenswrapper[4932]: E0218 19:34:49.016273 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:35:21.016233358 +0000 UTC m=+84.598188243 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.053115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.053454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.053479 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.053497 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: 
I0218 19:34:49.053513 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.155771 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:37:00.074805985 +0000 UTC Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.156286 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.156340 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.156363 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.156397 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.156420 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.178933 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.178979 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:49 crc kubenswrapper[4932]: E0218 19:34:49.179141 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.179279 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:49 crc kubenswrapper[4932]: E0218 19:34:49.179330 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:49 crc kubenswrapper[4932]: E0218 19:34:49.179517 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.259231 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.259288 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.259305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.259328 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.259346 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.362503 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.362595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.362619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.362651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.362673 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.466002 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.466102 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.466123 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.466150 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.466200 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.568716 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.568809 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.568835 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.568867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.568888 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.671241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.671292 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.671306 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.671322 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.671335 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.773820 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.773887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.773908 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.773932 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.773954 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.877350 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.877424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.877453 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.877481 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.877502 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.980896 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.980992 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.981013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.981096 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:49 crc kubenswrapper[4932]: I0218 19:34:49.981155 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:49Z","lastTransitionTime":"2026-02-18T19:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.083588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.083675 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.083693 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.083713 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.083729 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.156253 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:31:11.709480076 +0000 UTC Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.178670 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:50 crc kubenswrapper[4932]: E0218 19:34:50.178805 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.180002 4932 scope.go:117] "RemoveContainer" containerID="e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.186045 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.186085 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.186101 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.186118 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.186132 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.289145 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.289423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.289436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.289453 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.289467 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.391779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.391835 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.391850 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.391867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.391934 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.494789 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.494874 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.494893 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.494916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.494933 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.597936 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.598009 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.598033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.598065 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.598086 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.648400 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/1.log" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.652257 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.653552 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.675419 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.700861 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.700965 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.700991 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 
19:34:50.701023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.701049 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.708748 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 
19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.732226 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc 
kubenswrapper[4932]: I0218 19:34:50.749007 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.781343 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.797377 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.803707 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.803748 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.803761 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.803780 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.803792 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.813753 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.831443 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.845628 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.857423 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.879989 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.893723 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.905850 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.905879 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.905887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.905900 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.905909 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:50Z","lastTransitionTime":"2026-02-18T19:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.912812 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.925723 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.943579 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.961698 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:50 crc kubenswrapper[4932]: I0218 19:34:50.981628 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.008859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.008887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.008899 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.008911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.008921 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.040082 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:51 crc kubenswrapper[4932]: E0218 19:34:51.040282 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:51 crc kubenswrapper[4932]: E0218 19:34:51.040352 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:35:07.04033395 +0000 UTC m=+70.622288815 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.111709 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.111776 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.111789 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.111810 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.111824 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.157051 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 08:32:24.484225764 +0000 UTC Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.178488 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.178611 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.178700 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:51 crc kubenswrapper[4932]: E0218 19:34:51.178634 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:51 crc kubenswrapper[4932]: E0218 19:34:51.178814 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:51 crc kubenswrapper[4932]: E0218 19:34:51.178933 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.215339 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.215406 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.215423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.215449 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.215468 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.317947 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.317991 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.318002 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.318019 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.318033 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.421052 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.421131 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.421150 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.421200 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.421216 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.524651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.524726 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.524754 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.524785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.524807 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.628949 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.629028 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.629052 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.629080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.629099 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.658525 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/2.log" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.659446 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/1.log" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.663131 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a" exitCode=1 Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.663232 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.663284 4932 scope.go:117] "RemoveContainer" containerID="e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.664157 4932 scope.go:117] "RemoveContainer" containerID="2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a" Feb 18 19:34:51 crc kubenswrapper[4932]: E0218 19:34:51.664425 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.687120 4932 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.706614 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.730423 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.732627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.732669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.732687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.732710 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.732727 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.752743 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.772354 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.797487 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.821130 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d
4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.836258 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.836327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.836346 4932 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.836373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.836396 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.843585 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.860200 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.873493 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.889841 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.907971 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.926722 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.939883 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:51 crc 
kubenswrapper[4932]: I0218 19:34:51.939954 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.939980 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.940011 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.940031 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:51Z","lastTransitionTime":"2026-02-18T19:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.959585 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e137d2fc2040902d589dcd7dc476d5f0adac2cbcd4d9cd86493d8988232494a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:31Z\\\",\\\"message\\\":\\\"31.580806 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 
19:34:31.580833 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:34:31.580855 6358 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:34:31.580866 6358 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:34:31.580871 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:34:31.580921 6358 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:34:31.580946 6358 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:34:31.580904 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:34:31.581327 6358 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:34:31.581392 6358 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:34:31.581432 6358 factory.go:656] Stopping watch factory\\\\nI0218 19:34:31.581456 6358 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:34:31.581506 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:34:31.581541 6358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:34:31.581561 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0218 19:34:31.581772 6358 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cb
c83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.975389 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:51 crc kubenswrapper[4932]: I0218 19:34:51.994545 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0
a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.011917 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.043110 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.043206 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.043224 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.043246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.043264 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.147739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.147789 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.147801 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.147823 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.147840 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.157539 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 02:53:12.629093134 +0000 UTC Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.179002 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:52 crc kubenswrapper[4932]: E0218 19:34:52.179281 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.251079 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.251441 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.251531 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.251654 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.251928 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.355661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.355739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.355763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.355793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.355820 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.459938 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.460420 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.460589 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.460771 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.460912 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.564579 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.564634 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.564645 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.564663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.564675 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.667063 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.667129 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.667148 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.667203 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.667222 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.670897 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/2.log" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.675808 4932 scope.go:117] "RemoveContainer" containerID="2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a" Feb 18 19:34:52 crc kubenswrapper[4932]: E0218 19:34:52.676116 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.699367 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.718211 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.737313 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.757351 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.770381 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.770718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.770910 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.771071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.771260 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.777982 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.798235 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.825536 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.846697 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.872424 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.874549 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.874611 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.874633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.874664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.874687 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.895112 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.916392 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.938996 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d
4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.956369 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.976602 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.978864 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.979119 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.979341 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.979754 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:52 crc kubenswrapper[4932]: I0218 19:34:52.979939 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:52Z","lastTransitionTime":"2026-02-18T19:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.002219 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.017951 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.042659 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e26
7bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.083086 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.083461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.083587 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.083689 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.083786 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.158123 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:01:13.019631869 +0000 UTC Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.178904 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.179000 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.179010 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.179078 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.179256 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.179495 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.186080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.186117 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.186128 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.186142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.186155 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.187477 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.187546 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.187572 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.187606 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.187630 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.210037 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.215025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.215259 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.215446 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.215595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.215722 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.235562 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.240420 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.240724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.240957 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.241111 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.241330 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.261670 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.266613 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.266661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.266674 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.266690 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.266702 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.288441 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.293404 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.293790 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.294241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.294602 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.294947 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.314549 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:53 crc kubenswrapper[4932]: E0218 19:34:53.314773 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.317327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.317368 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.317386 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.317410 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.317429 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.420776 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.420840 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.420857 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.420881 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.420899 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.523994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.524085 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.524112 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.524144 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.524168 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.627777 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.627877 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.627911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.627937 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.627955 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.732087 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.732161 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.732241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.732339 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.732364 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.835694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.835742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.835758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.835780 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.835799 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.938937 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.939048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.939066 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.939090 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:53 crc kubenswrapper[4932]: I0218 19:34:53.939107 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:53Z","lastTransitionTime":"2026-02-18T19:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.042978 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.043059 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.043083 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.043113 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.043144 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.146855 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.146934 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.146950 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.146976 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.146995 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.159597 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:46:30.673445297 +0000 UTC Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.178304 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:54 crc kubenswrapper[4932]: E0218 19:34:54.178486 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.250351 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.250422 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.250439 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.250465 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.250489 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.353464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.353520 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.353536 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.353559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.353574 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.456107 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.456197 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.456217 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.456246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.456266 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.559974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.560061 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.560080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.560111 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.560130 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.663348 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.663409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.663428 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.663454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.663476 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.766789 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.766848 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.766866 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.766892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.766910 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.870573 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.870655 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.870679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.870706 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.870724 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.973926 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.973998 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.974020 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.974048 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:54 crc kubenswrapper[4932]: I0218 19:34:54.974070 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:54Z","lastTransitionTime":"2026-02-18T19:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.077554 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.077617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.077634 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.077659 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.077677 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.159698 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:32:28.914349544 +0000 UTC Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.178498 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.178579 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.178589 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:55 crc kubenswrapper[4932]: E0218 19:34:55.178690 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:55 crc kubenswrapper[4932]: E0218 19:34:55.178856 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:55 crc kubenswrapper[4932]: E0218 19:34:55.178990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.180543 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.180596 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.180614 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.180639 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.180657 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.283069 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.283137 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.283155 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.283212 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.283237 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.386803 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.386860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.386882 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.386908 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.386925 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.489838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.489983 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.490001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.490031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.490052 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.593000 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.593066 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.593229 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.593260 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.593280 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.696326 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.696392 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.696409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.696435 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.696453 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.845056 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.845111 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.845129 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.845152 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.845170 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.948230 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.948283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.948300 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.948321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:55 crc kubenswrapper[4932]: I0218 19:34:55.948336 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:55Z","lastTransitionTime":"2026-02-18T19:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.052104 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.052246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.052273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.052305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.052344 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.155575 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.155689 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.155715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.155749 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.155774 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.160865 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:06:06.537281537 +0000 UTC Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.178529 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:56 crc kubenswrapper[4932]: E0218 19:34:56.178725 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.258951 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.259022 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.259047 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.259074 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.259096 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.361877 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.361952 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.361969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.361991 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.362018 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.465267 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.465330 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.465352 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.465376 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.465392 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.568434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.568477 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.568493 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.568513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.568530 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.670583 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.670629 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.670647 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.670666 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.670681 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.773802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.773867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.773885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.773918 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.773937 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.876855 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.876912 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.876928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.876951 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.876966 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.980072 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.980164 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.980239 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.980281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:56 crc kubenswrapper[4932]: I0218 19:34:56.980306 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:56Z","lastTransitionTime":"2026-02-18T19:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.082876 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.082969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.082987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.083012 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.083028 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.161124 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:35:30.943459628 +0000 UTC Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.178995 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:57 crc kubenswrapper[4932]: E0218 19:34:57.179128 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.179386 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.179511 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:57 crc kubenswrapper[4932]: E0218 19:34:57.179620 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:57 crc kubenswrapper[4932]: E0218 19:34:57.179757 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.186353 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.186408 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.186426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.186450 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.186468 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.200573 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.218553 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.237200 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.255154 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.273901 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.289496 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.289767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.289970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.290128 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.290336 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.292526 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.312380 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.331132 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.345890 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.357854 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc 
kubenswrapper[4932]: I0218 19:34:57.368788 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.381515 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.394514 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.394559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.394570 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 
19:34:57.394586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.394600 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.401988 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.415690 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.432865 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.450393 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.467124 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:34:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.501133 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.501218 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.501237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.501256 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.501269 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.608409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.608788 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.608892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.609005 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.609107 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.711548 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.711699 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.711731 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.711779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.711807 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.814898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.814963 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.814981 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.815005 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.815023 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.917838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.917919 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.917941 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.917972 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:57 crc kubenswrapper[4932]: I0218 19:34:57.917990 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:57Z","lastTransitionTime":"2026-02-18T19:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.021562 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.021625 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.021643 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.021666 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.021684 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.124697 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.124764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.124787 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.124817 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.124839 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.161528 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 13:10:43.680088932 +0000 UTC Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.178220 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:34:58 crc kubenswrapper[4932]: E0218 19:34:58.178413 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.227831 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.227898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.227915 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.227940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.227961 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.331397 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.331489 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.331514 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.331545 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.331570 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.435112 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.435257 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.435275 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.435299 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.435317 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.538646 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.538711 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.538733 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.538763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.538783 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.642101 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.642198 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.642219 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.642246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.642301 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.745008 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.745090 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.745117 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.745149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.745170 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.848478 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.848847 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.848987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.849165 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.849332 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.952898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.953418 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.953619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.953863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:58 crc kubenswrapper[4932]: I0218 19:34:58.954044 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:58Z","lastTransitionTime":"2026-02-18T19:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.057730 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.057792 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.057812 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.057846 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.057887 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.161229 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.161356 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.161381 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.161597 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.161628 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.161864 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:36:58.566752194 +0000 UTC Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.178337 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:34:59 crc kubenswrapper[4932]: E0218 19:34:59.178518 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.178801 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:34:59 crc kubenswrapper[4932]: E0218 19:34:59.178907 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.179367 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:34:59 crc kubenswrapper[4932]: E0218 19:34:59.179477 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.264844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.264896 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.264916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.264940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.264958 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.368721 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.368777 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.368794 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.368818 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.368836 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.472108 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.472192 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.472221 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.472265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.472290 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.575574 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.575639 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.575661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.575687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.575704 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.678225 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.678265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.678283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.678305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.678322 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.781791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.781847 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.781863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.781887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.781904 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.884638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.884693 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.884711 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.884734 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.884751 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.989049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.989246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.989278 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.989321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:34:59 crc kubenswrapper[4932]: I0218 19:34:59.989344 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:34:59Z","lastTransitionTime":"2026-02-18T19:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.094068 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.094149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.094170 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.094224 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.094253 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.162918 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 10:35:47.360175631 +0000 UTC Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.178890 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:00 crc kubenswrapper[4932]: E0218 19:35:00.180047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.197493 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.197563 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.197582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.197608 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.197628 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.301252 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.301290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.301301 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.301317 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.301327 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.404332 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.404411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.404430 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.404480 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.404498 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.507913 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.508321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.508468 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.508599 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.508742 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.611980 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.612060 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.612083 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.612115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.612137 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.717637 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.717735 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.717765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.717800 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.717827 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.820922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.821246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.821448 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.821665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.821839 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.925136 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.925242 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.925270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.925300 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:00 crc kubenswrapper[4932]: I0218 19:35:00.925323 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:00Z","lastTransitionTime":"2026-02-18T19:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.027844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.027903 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.027921 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.027944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.027962 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.130792 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.130826 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.130835 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.130850 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.130859 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.164288 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:51:06.271774056 +0000 UTC Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.178897 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.178937 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:01 crc kubenswrapper[4932]: E0218 19:35:01.179314 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.178967 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:01 crc kubenswrapper[4932]: E0218 19:35:01.179612 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:01 crc kubenswrapper[4932]: E0218 19:35:01.179750 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.233402 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.233440 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.233450 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.233464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.233477 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.335783 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.335815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.335826 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.335842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.335853 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.438819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.438866 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.438878 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.438897 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.438911 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.541896 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.541937 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.541947 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.541965 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.541978 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.644841 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.644918 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.644932 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.644951 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.644963 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.747569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.747614 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.747627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.747644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.747656 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.852476 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.852540 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.852558 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.852583 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.852601 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.955014 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.955056 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.955064 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.955080 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:01 crc kubenswrapper[4932]: I0218 19:35:01.955089 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:01Z","lastTransitionTime":"2026-02-18T19:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.057542 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.057578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.057586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.057599 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.057611 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.160616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.160661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.160696 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.160715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.160727 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.165023 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:34:22.016490115 +0000 UTC Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.178327 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:02 crc kubenswrapper[4932]: E0218 19:35:02.178446 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.263804 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.263857 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.263874 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.263898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.263916 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.365574 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.365609 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.365616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.365630 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.365639 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.467921 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.467975 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.467992 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.468016 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.468033 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.570633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.570679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.570691 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.570706 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.570720 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.673248 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.673281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.673294 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.673308 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.673319 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.775627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.775710 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.775735 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.775765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.775788 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.878500 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.878571 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.878591 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.878635 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.878664 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.982130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.982515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.982694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.982874 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:02 crc kubenswrapper[4932]: I0218 19:35:02.983053 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:02Z","lastTransitionTime":"2026-02-18T19:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.086302 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.086344 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.086353 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.086369 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.086381 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.165617 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 00:29:32.877209175 +0000 UTC Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.179450 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.179534 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.179643 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.179669 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.179729 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.179913 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.188691 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.188739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.188756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.188779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.188795 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.290813 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.290856 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.290867 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.290885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.290896 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.392889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.392961 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.392984 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.393012 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.393033 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.494701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.494734 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.494742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.494758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.494769 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.495635 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.495832 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.496071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.496353 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.496392 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.518862 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.522885 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.522914 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.522921 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.522933 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.522941 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.536050 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.539863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.539897 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.539909 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.539923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.539935 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.554272 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.557828 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.557878 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.557895 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.557919 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.557936 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.570291 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.574147 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.574220 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.574237 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.574260 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.574278 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.588857 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:03Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:03 crc kubenswrapper[4932]: E0218 19:35:03.589072 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.597461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.597492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.597503 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.597520 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.597531 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.700136 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.700207 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.700234 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.700254 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.700269 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.802241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.802296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.802312 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.802335 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.802353 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.904203 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.904242 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.904253 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.904270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:03 crc kubenswrapper[4932]: I0218 19:35:03.904295 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:03Z","lastTransitionTime":"2026-02-18T19:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.006822 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.006870 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.006886 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.006907 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.006923 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.109390 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.109424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.109434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.109449 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.109459 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.166634 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:28:14.949291408 +0000 UTC Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.178374 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:04 crc kubenswrapper[4932]: E0218 19:35:04.178464 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.179465 4932 scope.go:117] "RemoveContainer" containerID="2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a" Feb 18 19:35:04 crc kubenswrapper[4932]: E0218 19:35:04.179699 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.211133 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.211223 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.211247 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.211273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.211289 4932 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.313833 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.313889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.313905 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.313928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.313946 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.416249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.416279 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.416286 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.416301 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.416311 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.518863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.519001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.519059 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.519120 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.519204 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.620834 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.620869 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.620881 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.620895 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.620905 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.722822 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.722847 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.722857 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.722870 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.722880 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.827933 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.827989 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.828013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.828041 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.828064 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.930115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.930149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.930162 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.930177 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:04 crc kubenswrapper[4932]: I0218 19:35:04.930196 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:04Z","lastTransitionTime":"2026-02-18T19:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.031758 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.031793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.031803 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.031820 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.031828 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.133469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.133508 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.133515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.133527 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.133536 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.167091 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:58:31.666527356 +0000 UTC Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.178572 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:05 crc kubenswrapper[4932]: E0218 19:35:05.178660 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.178700 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:05 crc kubenswrapper[4932]: E0218 19:35:05.178748 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.178773 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:05 crc kubenswrapper[4932]: E0218 19:35:05.178809 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.235776 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.235866 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.235889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.235926 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.235949 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.338079 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.338142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.338154 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.338181 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.338193 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.440547 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.440585 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.440596 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.440609 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.440619 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.542839 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.542896 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.542909 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.542923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.542935 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.645505 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.645539 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.645548 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.645560 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.645570 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.747854 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.747881 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.747891 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.747903 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.747911 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.850236 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.850266 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.850273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.850286 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.850295 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.952031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.952058 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.952066 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.952077 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:05 crc kubenswrapper[4932]: I0218 19:35:05.952087 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:05Z","lastTransitionTime":"2026-02-18T19:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.053893 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.053945 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.053953 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.053967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.053976 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.156040 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.156108 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.156117 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.156131 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.156546 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.167298 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:50:23.895432682 +0000 UTC Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.178764 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:06 crc kubenswrapper[4932]: E0218 19:35:06.178932 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.259285 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.259359 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.259377 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.259413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.259436 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.362271 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.362310 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.362318 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.362331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.362339 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.464355 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.464417 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.464434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.464459 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.464475 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.567217 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.567265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.567277 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.567292 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.567304 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.669621 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.669894 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.669961 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.670024 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.670088 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.772529 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.772819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.772925 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.773026 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.773167 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.876680 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.876996 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.877130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.877288 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.877422 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.980926 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.980991 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.981040 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.981065 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:06 crc kubenswrapper[4932]: I0218 19:35:06.981082 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:06Z","lastTransitionTime":"2026-02-18T19:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.083543 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.083791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.083884 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.083953 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.084018 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.104621 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:07 crc kubenswrapper[4932]: E0218 19:35:07.104931 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:35:07 crc kubenswrapper[4932]: E0218 19:35:07.105087 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:35:39.105061717 +0000 UTC m=+102.687016642 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.168306 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:04:28.541863818 +0000 UTC Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.178673 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.178806 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:07 crc kubenswrapper[4932]: E0218 19:35:07.179187 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.179251 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:07 crc kubenswrapper[4932]: E0218 19:35:07.179464 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:07 crc kubenswrapper[4932]: E0218 19:35:07.179341 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.185564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.185603 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.185616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.185632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.185645 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.192886 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e0100944
21a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.207968 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.225327 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.238003 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.254009 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.265225 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.278838 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.286974 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.287818 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.287860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.287873 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.287889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.287901 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.296926 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.307385 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.315639 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.324466 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.335515 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.345198 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.356816 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.367399 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.379994 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.391372 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc 
kubenswrapper[4932]: I0218 19:35:07.391411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.391420 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.391436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.391446 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.494637 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.494677 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.494691 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.494708 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.494720 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.597309 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.597377 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.597395 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.597419 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.597438 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.700838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.701230 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.701508 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.701772 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.701980 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.805816 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.806221 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.806354 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.806650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.806790 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.910559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.910662 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.910693 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.910723 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:07 crc kubenswrapper[4932]: I0218 19:35:07.910813 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:07Z","lastTransitionTime":"2026-02-18T19:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.014127 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.014201 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.014211 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.014231 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.014244 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.117577 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.117644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.117665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.117692 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.117711 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.169531 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:32:39.753602732 +0000 UTC Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.178196 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:08 crc kubenswrapper[4932]: E0218 19:35:08.178410 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.221248 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.221290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.221303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.221326 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.221342 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.323780 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.323824 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.323836 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.323857 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.323872 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.427386 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.427443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.427462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.427488 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.427509 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.533491 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.533740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.533911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.534114 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.534374 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.638402 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.638434 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.638446 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.638466 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.638478 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.734777 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/0.log" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.734832 4932 generic.go:334] "Generic (PLEG): container finished" podID="1b8d80e2-307e-43b6-9003-e77eef51e084" containerID="e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7" exitCode=1 Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.734868 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerDied","Data":"e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.735331 4932 scope.go:117] "RemoveContainer" containerID="e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.745071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.745119 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.745137 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.745160 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.745198 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.757583 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.778350 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.803391 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.825358 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.848409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.848556 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.848573 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.848600 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.848616 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.850505 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.867095 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.879662 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e26
7bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.893878 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.907945 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.922980 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.934483 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.950618 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.952259 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.952307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.952322 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.952342 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.952355 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:08Z","lastTransitionTime":"2026-02-18T19:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.963504 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.979035 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54d
d4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:08 crc kubenswrapper[4932]: I0218 19:35:08.997529 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7
daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:
26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.012835 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d
4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.029804 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.054457 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.054512 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.054524 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.054538 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.054549 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.156525 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.156561 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.156569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.156581 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.156591 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.170711 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:54:05.489085946 +0000 UTC Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.179032 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.179075 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.179082 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:09 crc kubenswrapper[4932]: E0218 19:35:09.179149 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:09 crc kubenswrapper[4932]: E0218 19:35:09.179217 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:09 crc kubenswrapper[4932]: E0218 19:35:09.179310 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.259342 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.259398 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.259410 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.259426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.259460 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.362312 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.362347 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.362357 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.362374 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.362384 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.464594 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.464638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.464650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.464668 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.464682 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.567408 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.567461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.567473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.567491 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.567504 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.670609 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.670654 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.670668 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.670687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.670700 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.739152 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/0.log" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.739257 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerStarted","Data":"3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.759948 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.773618 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.773695 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.773718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.773748 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.773771 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.779861 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.798067 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 
2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.809398 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.822191 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.833556 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.844995 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.856092 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.877051 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.877311 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.877351 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.877360 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.877376 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.877386 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.892644 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.911959 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T
19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e
0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.932247 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.945909 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.957411 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.974890 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.979465 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.979492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.979502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.979518 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.979528 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:09Z","lastTransitionTime":"2026-02-18T19:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:09 crc kubenswrapper[4932]: I0218 19:35:09.996254 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.007709 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.081796 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.082010 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.082029 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.082047 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.082060 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.170959 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:23:41.708676632 +0000 UTC Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.178320 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:10 crc kubenswrapper[4932]: E0218 19:35:10.178466 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.184209 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.184245 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.184255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.184269 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.184278 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.286872 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.286908 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.286920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.286937 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.286950 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.389819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.389880 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.389898 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.389924 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.389941 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.493485 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.493569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.493593 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.493622 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.493642 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.596361 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.596439 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.596458 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.596483 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.596503 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.700270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.700348 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.700368 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.700397 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.700415 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.803275 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.803367 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.803389 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.803417 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.803435 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.907046 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.907133 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.907144 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.907167 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:10 crc kubenswrapper[4932]: I0218 19:35:10.907201 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:10Z","lastTransitionTime":"2026-02-18T19:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.010593 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.010683 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.010708 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.010738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.010762 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.113403 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.113458 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.113475 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.113501 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.113520 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.171144 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 12:13:59.361954623 +0000 UTC Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.178502 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.178654 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:11 crc kubenswrapper[4932]: E0218 19:35:11.178671 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:11 crc kubenswrapper[4932]: E0218 19:35:11.178946 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.179273 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:11 crc kubenswrapper[4932]: E0218 19:35:11.179611 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.216469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.216521 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.216531 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.216549 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.216564 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.319678 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.319720 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.319729 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.319746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.319756 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.422934 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.423290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.423433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.423583 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.423719 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.527246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.528126 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.528392 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.528606 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.528800 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.631089 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.631157 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.631215 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.631241 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.631261 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.734007 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.734372 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.734504 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.734664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.734786 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.837843 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.838156 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.838358 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.838542 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.838716 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.943842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.943921 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.943947 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.943977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:11 crc kubenswrapper[4932]: I0218 19:35:11.943999 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:11Z","lastTransitionTime":"2026-02-18T19:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.048056 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.048500 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.048644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.048788 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.048960 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.153307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.153363 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.153382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.153404 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.153421 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.171945 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:47:46.453309589 +0000 UTC Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.178353 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:12 crc kubenswrapper[4932]: E0218 19:35:12.178680 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.257150 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.257289 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.257307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.257331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.257348 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.359928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.359978 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.359993 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.360012 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.360399 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.464650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.464721 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.464740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.464764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.464784 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.567643 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.567704 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.567724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.567749 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.567769 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.670746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.670820 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.670838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.670868 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.670893 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.774436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.774517 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.774535 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.774587 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.774607 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.877497 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.877556 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.877574 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.877597 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.877614 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.980638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.980702 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.980718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.980741 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:12 crc kubenswrapper[4932]: I0218 19:35:12.980758 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:12Z","lastTransitionTime":"2026-02-18T19:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.083725 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.083787 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.083805 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.083855 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.083874 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.172966 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:57:30.269729292 +0000 UTC Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.178655 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.178727 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.178893 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.178995 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.179455 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.179646 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.187633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.187675 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.187683 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.187701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.187716 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.198344 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.291454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.291517 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.291533 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.291556 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.291574 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.394902 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.394967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.394983 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.395005 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.395022 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.497566 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.497620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.497647 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.497675 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.497697 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.600650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.600733 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.600761 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.600790 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.600814 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.703248 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.703340 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.703358 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.703380 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.703398 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.763372 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.763800 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.764016 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.764252 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.764450 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.788923 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.795261 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.795331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.795356 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.795387 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.795409 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.820779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.820983 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.821111 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.821312 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.821456 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.841313 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.846501 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.846562 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.846579 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.846605 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.846623 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.866549 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.871465 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.871532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.871555 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.871583 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.871607 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.892463 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:13Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:13 crc kubenswrapper[4932]: E0218 19:35:13.892691 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.894664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.894714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.894731 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.894756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.894773 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.997522 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.997585 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.997603 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.997628 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:13 crc kubenswrapper[4932]: I0218 19:35:13.997648 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:13Z","lastTransitionTime":"2026-02-18T19:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.100578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.100632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.100650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.100672 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.100689 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.173353 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:31:42.97760435 +0000 UTC Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.178989 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:14 crc kubenswrapper[4932]: E0218 19:35:14.179229 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.204668 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.204731 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.204752 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.204773 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.204791 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.307807 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.307848 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.307858 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.307873 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.307884 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.411031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.411077 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.411095 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.411119 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.411135 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.514829 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.514906 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.514924 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.514949 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.514967 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.617750 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.617803 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.617819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.617843 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.617862 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.721363 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.721425 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.721443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.721467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.721483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.824506 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.824582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.824606 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.824640 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.824664 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.928159 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.928244 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.928264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.928290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:14 crc kubenswrapper[4932]: I0218 19:35:14.928308 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:14Z","lastTransitionTime":"2026-02-18T19:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.032507 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.032583 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.032599 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.032619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.032631 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.135659 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.135711 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.135728 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.135750 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.135766 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.174243 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:35:23.752699865 +0000 UTC Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.178696 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.178730 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.178873 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:15 crc kubenswrapper[4932]: E0218 19:35:15.179086 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:15 crc kubenswrapper[4932]: E0218 19:35:15.179300 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:15 crc kubenswrapper[4932]: E0218 19:35:15.179417 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.239066 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.239114 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.239125 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.239143 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.239156 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.342653 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.342715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.342738 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.342767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.342790 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.445757 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.445823 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.445840 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.445864 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.445881 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.549297 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.549360 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.549380 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.549409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.549433 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.651947 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.652031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.652053 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.652078 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.652096 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.755665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.755725 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.755744 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.755767 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.755784 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.859107 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.859208 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.859235 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.859262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.859284 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.962892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.962969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.962990 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.963018 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:15 crc kubenswrapper[4932]: I0218 19:35:15.963040 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:15Z","lastTransitionTime":"2026-02-18T19:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.065629 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.065694 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.065711 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.065737 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.065755 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.168926 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.168967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.168979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.168997 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.169009 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.175224 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:05:07.642692221 +0000 UTC Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.179215 4932 scope.go:117] "RemoveContainer" containerID="2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.179495 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:16 crc kubenswrapper[4932]: E0218 19:35:16.179551 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.271815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.271916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.271955 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.271977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.272016 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.374221 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.374264 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.374278 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.374296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.374310 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.478050 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.478107 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.478124 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.478147 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.478164 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.581612 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.581671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.581689 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.581714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.581737 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.683997 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.684051 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.684069 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.684093 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.684110 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.767320 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/2.log" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.770066 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.770557 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.786117 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.786194 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.786206 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.786223 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.786237 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.787666 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.804330 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.821508 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.839783 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.857067 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6b1ceeb-ed25-4345-a294-674238130833\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.873437 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.889550 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.889607 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 
crc kubenswrapper[4932]: I0218 19:35:16.889624 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.889655 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.889675 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.899944 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.916768 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc 
kubenswrapper[4932]: I0218 19:35:16.935059 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.952584 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.972836 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.988290 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.992736 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.992779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.992798 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.992824 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:16 crc kubenswrapper[4932]: I0218 19:35:16.992841 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:16Z","lastTransitionTime":"2026-02-18T19:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.008588 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.026082 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.040413 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.057269 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.079605 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.095400 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.095451 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.095462 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.095479 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.095553 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.105786 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.175673 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-12-24 12:05:14.518070468 +0000 UTC Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.179136 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.179207 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:17 crc kubenswrapper[4932]: E0218 19:35:17.179316 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:17 crc kubenswrapper[4932]: E0218 19:35:17.179503 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.179544 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:17 crc kubenswrapper[4932]: E0218 19:35:17.179627 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.196062 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.198682 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.198746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.198791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.198815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.198827 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.214543 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6b1ceeb-ed25-4345-a294-674238130833\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.230885 4932 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.300975 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.301018 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.301038 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.301054 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.301065 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.303589 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for 
map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.315571 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc 
kubenswrapper[4932]: I0218 19:35:17.327665 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.343051 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.358423 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.368276 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.379169 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.389522 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.398845 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.403182 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.403223 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.403235 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.403250 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.403262 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.408674 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.421680 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.433743 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.446144 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.461332 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.477566 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.505724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.505765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.505773 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.505787 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.505796 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.608772 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.609158 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.609214 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.609248 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.609268 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.713549 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.713620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.713640 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.713667 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.713686 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.780962 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/3.log" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.782654 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/2.log" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.786330 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" exitCode=1 Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.786377 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.786423 4932 scope.go:117] "RemoveContainer" containerID="2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.788824 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:35:17 crc kubenswrapper[4932]: E0218 19:35:17.789122 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.805625 4932 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e5542
26b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.815886 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.815922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.815934 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.815955 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.815967 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.819414 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.840131 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.859709 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.877307 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6b1ceeb-ed25-4345-a294-674238130833\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.896125 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.919516 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.919582 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:17 
crc kubenswrapper[4932]: I0218 19:35:17.919595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.919614 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.919629 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:17Z","lastTransitionTime":"2026-02-18T19:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.930934 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09df170353e55957e3c800d1812026ee565377c15dd4b29ea1c96753aa128a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:34:51Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230768 6595 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI0218 19:34:51.230812 6595 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 4.323485ms\\\\nI0218 19:34:51.230792 6595 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:34:51.230858 6595 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0218 19:34:51.230890 6595 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:17Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0218 19:35:17.178478 6969 reflector.go:311] Stopping reflector 
*v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178611 6969 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178682 6969 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:35:17.178776 6969 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178928 6969 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.179579 6969 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:35:17.179590 6969 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:35:17.179618 6969 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:35:17.179632 6969 factory.go:656] Stopping watch factory\\\\nI0218 19:35:17.179640 6969 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:35:17.179653 6969 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:35:17.179681 6969 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.947048 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.967528 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:17 crc kubenswrapper[4932]: I0218 19:35:17.988361 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.005504 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.020999 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.023151 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.023225 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.023244 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.023269 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.023287 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.043236 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.057892 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.073674 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.091547 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.110855 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.126622 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.126688 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.126707 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.126733 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.126750 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.130394 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.176293 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-12 21:23:17.053872213 +0000 UTC Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.178689 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:18 crc kubenswrapper[4932]: E0218 19:35:18.178873 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.229883 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.229946 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.229974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.230005 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.230030 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.333283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.333349 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.333368 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.333393 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.333411 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.436169 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.436262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.436281 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.436306 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.436324 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.540333 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.540416 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.540438 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.540473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.540492 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.644447 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.644537 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.644557 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.644595 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.644618 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.748278 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.748361 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.748382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.748415 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.748436 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.792986 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/3.log" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.798328 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:35:18 crc kubenswrapper[4932]: E0218 19:35:18.798665 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.822639 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e9
9faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.845375 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.856725 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.856892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.856977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc 
kubenswrapper[4932]: I0218 19:35:18.857018 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.857097 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.872999 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.894068 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582
5771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.910612 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6b1ceeb-ed25-4345-a294-674238130833\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.933245 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.960834 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.960892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:18 crc 
kubenswrapper[4932]: I0218 19:35:18.960911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.960935 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.960953 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:18Z","lastTransitionTime":"2026-02-18T19:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.966798 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:17Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0218 19:35:17.178478 6969 reflector.go:311] Stopping reflector 
*v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178611 6969 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178682 6969 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:35:17.178776 6969 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178928 6969 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.179579 6969 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:35:17.179590 6969 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:35:17.179618 6969 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:35:17.179632 6969 factory.go:656] Stopping watch factory\\\\nI0218 19:35:17.179640 6969 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:35:17.179653 6969 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:35:17.179681 6969 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:35:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:18 crc kubenswrapper[4932]: I0218 19:35:18.985427 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.007139 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.028926 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.050915 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.064407 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.064471 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.064490 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.064517 4932 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.064579 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.067564 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.087748 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.107002 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.123845 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.140344 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.164563 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.168592 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.168663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.168688 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.168720 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.168744 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.176514 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 20:10:47.539530886 +0000 UTC Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.178917 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:19 crc kubenswrapper[4932]: E0218 19:35:19.179079 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.179462 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.179585 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:19 crc kubenswrapper[4932]: E0218 19:35:19.179720 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:19 crc kubenswrapper[4932]: E0218 19:35:19.180035 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.184741 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control
-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:19Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.272089 4932 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.272135 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.272153 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.272208 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.272225 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.375401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.375454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.375472 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.375497 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.375520 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.478718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.478802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.478819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.478844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.478862 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.582143 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.582270 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.582298 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.582330 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.582354 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.684899 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.684977 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.685002 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.685033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.685058 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.788233 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.788290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.788305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.788327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.788343 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.892282 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.892355 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.892378 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.892409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.892433 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.995818 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.995868 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.995883 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.995905 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:19 crc kubenswrapper[4932]: I0218 19:35:19.995919 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:19Z","lastTransitionTime":"2026-02-18T19:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.098617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.098684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.098701 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.098726 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.098744 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.177219 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:28:58.328890301 +0000 UTC Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.178441 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.178608 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.202313 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.202380 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.202401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.202427 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.202445 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.305301 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.305368 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.305385 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.305408 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.305426 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.408671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.408840 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.408859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.408887 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.408904 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.513010 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.513168 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.513218 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.513244 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.513270 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.616523 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.616576 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.616594 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.616617 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.616635 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.719287 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.719356 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.719373 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.719395 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.719411 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.821578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.821627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.821644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.821666 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.821683 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.925045 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.925108 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.925125 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.925148 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.925166 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:20Z","lastTransitionTime":"2026-02-18T19:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.971607 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.971676 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.971748 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:20 crc kubenswrapper[4932]: I0218 19:35:20.971792 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.971814 4932 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.971910 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.971883134 +0000 UTC m=+148.553838019 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.971937 4932 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.971966 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.971973 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.972066 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.972096 4932 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.971995 4932 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.972136 4932 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.972008 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.971987326 +0000 UTC m=+148.553942181 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.972293 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.972265423 +0000 UTC m=+148.554220458 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:35:20 crc kubenswrapper[4932]: E0218 19:35:20.972939 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.972910497 +0000 UTC m=+148.554865532 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.027962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.028004 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.028020 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.028042 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.028059 4932 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.072151 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:35:21 crc kubenswrapper[4932]: E0218 19:35:21.072373 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.072344283 +0000 UTC m=+148.654299158 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.131330 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.131399 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.131423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.131463 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.131486 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.177557 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 10:46:02.279422451 +0000 UTC Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.180289 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.180334 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.180371 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:21 crc kubenswrapper[4932]: E0218 19:35:21.180480 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:21 crc kubenswrapper[4932]: E0218 19:35:21.180621 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:21 crc kubenswrapper[4932]: E0218 19:35:21.180762 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.235532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.235605 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.235624 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.235652 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.235672 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.338600 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.338725 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.338746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.338771 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.338793 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.442658 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.442746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.442765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.442791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.442808 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.544979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.545023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.545038 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.545060 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.545076 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.647915 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.647954 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.647965 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.647981 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.647992 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.750560 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.750621 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.750637 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.750659 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.750676 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.853588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.853650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.853669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.853693 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.853711 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.956852 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.956912 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.956930 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.956953 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:21 crc kubenswrapper[4932]: I0218 19:35:21.956970 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:21Z","lastTransitionTime":"2026-02-18T19:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.059584 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.059642 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.059659 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.059683 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.059700 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.162736 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.162817 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.162839 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.162863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.162880 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.178331 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:56:22.004214937 +0000 UTC Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.178527 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:22 crc kubenswrapper[4932]: E0218 19:35:22.178709 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.265470 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.265539 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.265566 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.265601 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.265627 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.369201 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.369316 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.369351 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.369383 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.369408 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.472619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.472663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.472672 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.472687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.472696 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.576367 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.576463 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.576489 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.576520 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.576543 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.680987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.681049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.681066 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.681090 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.681108 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.784501 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.784557 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.784578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.784603 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.784624 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.891987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.892049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.892068 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.892094 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.892112 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.995923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.995995 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.996014 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.996038 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:22 crc kubenswrapper[4932]: I0218 19:35:22.996056 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:22Z","lastTransitionTime":"2026-02-18T19:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.098664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.098705 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.098715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.098752 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.098767 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.178467 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:22:06.021164955 +0000 UTC Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.178718 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.178774 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.178756 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:23 crc kubenswrapper[4932]: E0218 19:35:23.178929 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:23 crc kubenswrapper[4932]: E0218 19:35:23.179023 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:23 crc kubenswrapper[4932]: E0218 19:35:23.179133 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.200994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.201052 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.201071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.201095 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.201114 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.304405 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.304469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.304487 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.304511 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.304529 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.407363 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.407438 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.407456 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.407485 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.407504 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.510493 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.510533 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.510542 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.510559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.510569 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.613762 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.613812 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.613829 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.613851 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.613867 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.717074 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.717141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.717163 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.717227 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.717256 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.820246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.820328 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.820350 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.820391 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.820422 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.923526 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.923674 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.923700 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.923735 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.923758 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.938538 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.938579 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.938604 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.938628 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.938643 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: E0218 19:35:23.959525 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.964742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.964802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.964819 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.964844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.964864 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:23 crc kubenswrapper[4932]: E0218 19:35:23.983271 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:23Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.988489 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.988560 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.988584 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.988616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:23 crc kubenswrapper[4932]: I0218 19:35:23.988642 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:23Z","lastTransitionTime":"2026-02-18T19:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: E0218 19:35:24.008209 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.013336 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.013374 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.013390 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.013411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.013427 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: E0218 19:35:24.028588 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.032921 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.032962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.032974 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.032992 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.033003 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: E0218 19:35:24.051895 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:24 crc kubenswrapper[4932]: E0218 19:35:24.052034 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.054001 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.054032 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.054043 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.054057 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.054068 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.157409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.157490 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.157516 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.157551 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.157576 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.178939 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:43:20.077823217 +0000 UTC Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.179042 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:24 crc kubenswrapper[4932]: E0218 19:35:24.179261 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.261223 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.261307 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.261334 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.261369 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.261395 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.363993 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.364064 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.364084 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.364109 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.364127 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.468362 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.468761 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.468778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.468802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.468819 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.572200 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.572283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.572304 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.572331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.572349 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.675672 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.675732 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.675744 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.675761 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.675772 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.779622 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.779714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.779750 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.779779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.779802 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.883408 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.883469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.883492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.883521 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.883541 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.986906 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.986979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.987016 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.987050 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:24 crc kubenswrapper[4932]: I0218 19:35:24.987074 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:24Z","lastTransitionTime":"2026-02-18T19:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.090567 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.090633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.090650 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.090675 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.090696 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.178800 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.178903 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:25 crc kubenswrapper[4932]: E0218 19:35:25.179004 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.178830 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.179097 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:55:38.272900807 +0000 UTC Feb 18 19:35:25 crc kubenswrapper[4932]: E0218 19:35:25.179245 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:25 crc kubenswrapper[4932]: E0218 19:35:25.179443 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.193422 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.193464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.193480 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.193503 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.193520 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.297319 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.297414 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.297431 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.297460 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.297479 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.400553 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.400647 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.400665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.400689 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.400709 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.503663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.503742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.503761 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.503785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.503804 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.606870 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.606945 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.606962 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.606987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.607005 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.710023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.710081 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.710103 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.710126 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.710143 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.812661 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.812703 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.812715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.812731 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.812743 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.915517 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.915586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.915605 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.915632 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:25 crc kubenswrapper[4932]: I0218 19:35:25.915651 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:25Z","lastTransitionTime":"2026-02-18T19:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.018507 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.018599 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.018627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.018657 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.018681 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.121426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.121460 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.121473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.121488 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.121500 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.178110 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:26 crc kubenswrapper[4932]: E0218 19:35:26.178324 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.180247 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:43:33.627133025 +0000 UTC Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.224920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.224986 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.225003 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.225024 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.225043 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.327882 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.327928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.327945 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.327970 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.327986 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.430591 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.430667 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.430684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.430707 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.430728 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.534011 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.534079 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.534102 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.534135 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.534155 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.637402 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.637453 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.637469 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.637492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.637509 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.739714 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.739756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.739768 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.739793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.739818 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.842220 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.842286 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.842303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.842329 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.842346 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.945811 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.945967 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.946002 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.946042 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:26 crc kubenswrapper[4932]: I0218 19:35:26.946069 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:26Z","lastTransitionTime":"2026-02-18T19:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.049352 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.049402 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.049423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.049470 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.049495 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.152862 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.152926 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.152951 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.152979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.153001 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.183581 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:27 crc kubenswrapper[4932]: E0218 19:35:27.183837 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.183901 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:02:38.811386878 +0000 UTC Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.184091 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:27 crc kubenswrapper[4932]: E0218 19:35:27.184311 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.184494 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:27 crc kubenswrapper[4932]: E0218 19:35:27.184702 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.202464 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.219345 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6b1ceeb-ed25-4345-a294-674238130833\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.240849 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.256448 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.256506 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc 
kubenswrapper[4932]: I0218 19:35:27.256528 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.256558 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.256579 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.272019 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:17Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0218 19:35:17.178478 6969 reflector.go:311] Stopping reflector 
*v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178611 6969 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178682 6969 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:35:17.178776 6969 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178928 6969 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.179579 6969 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:35:17.179590 6969 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:35:17.179618 6969 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:35:17.179632 6969 factory.go:656] Stopping watch factory\\\\nI0218 19:35:17.179640 6969 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:35:17.179653 6969 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:35:17.179681 6969 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:35:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.291769 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.307454 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.321208 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.336499 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.347009 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wb9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.356737 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.358948 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.359019 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.359034 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.359052 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.359068 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.369523 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8305e53ef
d08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.382680 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.392269 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.403513 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fe
d51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.418349 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.439905 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.458421 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.462210 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.462273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.462292 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.462317 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.462334 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.478863 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.564751 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.564815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.564839 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.564889 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.564916 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.667049 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.667107 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.667125 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.667149 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.667167 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.771389 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.771441 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.771458 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.771482 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.771502 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.874153 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.874234 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.874251 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.874273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.874290 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.977371 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.977423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.977443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.977467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:27 crc kubenswrapper[4932]: I0218 19:35:27.977483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:27Z","lastTransitionTime":"2026-02-18T19:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.080716 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.081585 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.081778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.081922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.082074 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.178442 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:28 crc kubenswrapper[4932]: E0218 19:35:28.178799 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.184243 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:14:02.363350346 +0000 UTC Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.185333 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.185426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.185444 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.185468 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.185486 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.288461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.288798 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.289013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.289216 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.289376 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.392394 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.392457 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.392473 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.392500 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.392517 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.495958 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.496047 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.496071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.496095 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.496112 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.599860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.599923 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.599941 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.599968 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.599986 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.703004 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.703074 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.703101 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.703130 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.703152 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.806293 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.806351 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.806374 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.806401 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.806425 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.909786 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.909845 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.909861 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.909884 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:28 crc kubenswrapper[4932]: I0218 19:35:28.909902 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:28Z","lastTransitionTime":"2026-02-18T19:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.012975 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.013071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.013097 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.013128 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.013154 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.115455 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.115499 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.115516 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.115541 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.115558 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.178822 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.178904 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:29 crc kubenswrapper[4932]: E0218 19:35:29.179007 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:29 crc kubenswrapper[4932]: E0218 19:35:29.179111 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.179276 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:29 crc kubenswrapper[4932]: E0218 19:35:29.179638 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.184456 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:43:06.355076513 +0000 UTC Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.218593 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.218640 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.218660 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.218684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.218702 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.322989 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.323063 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.323083 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.323112 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.323137 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.427115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.427257 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.427282 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.427318 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.427342 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.530320 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.530562 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.530731 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.530860 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.530985 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.634922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.635396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.635538 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.635665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.635805 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.739892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.739973 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.739996 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.740026 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.740050 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.843208 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.843266 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.843335 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.843361 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.843379 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.946835 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.946897 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.946916 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.946942 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:29 crc kubenswrapper[4932]: I0218 19:35:29.946962 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:29Z","lastTransitionTime":"2026-02-18T19:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.049629 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.049704 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.049721 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.049745 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.049762 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.153029 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.153089 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.153105 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.153144 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.153161 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.178619 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:30 crc kubenswrapper[4932]: E0218 19:35:30.178815 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.185216 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:20:25.082154191 +0000 UTC Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.256665 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.256741 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.256765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.256812 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.256837 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.361025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.361093 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.361115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.361146 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.361166 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.464465 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.464536 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.464554 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.464578 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.464598 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.567262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.567325 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.567343 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.567365 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.567382 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.670255 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.670330 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.670355 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.670386 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.670405 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.773602 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.773654 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.773671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.773693 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.773709 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.876911 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.876982 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.877009 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.877037 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.877058 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.980322 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.980384 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.980400 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.980424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:30 crc kubenswrapper[4932]: I0218 19:35:30.980442 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:30Z","lastTransitionTime":"2026-02-18T19:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.083451 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.083603 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.083627 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.083651 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.083668 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.179380 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.179508 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:31 crc kubenswrapper[4932]: E0218 19:35:31.179581 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:31 crc kubenswrapper[4932]: E0218 19:35:31.179678 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.179768 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:31 crc kubenswrapper[4932]: E0218 19:35:31.179862 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.185355 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 01:24:39.065591145 +0000 UTC Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.186987 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.187082 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.187108 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.187136 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.187160 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.289847 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.289893 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.289910 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.289934 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.289951 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.393082 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.393141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.393159 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.393219 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.393237 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.496666 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.496718 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.496729 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.496746 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.496757 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.599848 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.599905 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.599922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.599944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.599961 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.702727 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.702786 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.702805 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.702829 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.702847 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.806537 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.806663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.806684 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.806708 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.806724 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.909590 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.909644 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.909660 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.909682 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:31 crc kubenswrapper[4932]: I0218 19:35:31.909698 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:31Z","lastTransitionTime":"2026-02-18T19:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.012521 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.012584 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.012602 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.012626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.012646 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.115947 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.116004 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.116025 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.116054 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.116076 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.178687 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:32 crc kubenswrapper[4932]: E0218 19:35:32.178863 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.185864 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:01:17.755211537 +0000 UTC Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.218537 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.218585 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.218618 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.218657 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.218681 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.321751 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.321808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.321822 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.321842 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.321854 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.424445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.424482 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.424494 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.424509 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.424519 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.526687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.526808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.526838 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.526850 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.526859 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.630040 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.630104 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.630126 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.630153 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.630245 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.732709 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.732760 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.732773 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.732792 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.732806 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.835481 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.835515 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.835525 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.835541 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.835552 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.938391 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.938444 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.938460 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.938482 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:32 crc kubenswrapper[4932]: I0218 19:35:32.938498 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:32Z","lastTransitionTime":"2026-02-18T19:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.041663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.041727 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.041743 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.041766 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.041783 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.145091 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.145177 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.145251 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.145289 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.145309 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.178951 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.179056 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:33 crc kubenswrapper[4932]: E0218 19:35:33.179137 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.179160 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:33 crc kubenswrapper[4932]: E0218 19:35:33.179401 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:33 crc kubenswrapper[4932]: E0218 19:35:33.180403 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.180503 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:35:33 crc kubenswrapper[4932]: E0218 19:35:33.181413 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.186123 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:16:06.083773546 +0000 UTC Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.210447 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.248513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.248576 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.248596 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.248620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.248637 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.352884 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.352941 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.352958 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.352981 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.353000 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.455668 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.455723 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.455742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.455766 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.455782 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.558853 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.558920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.558938 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.559062 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.559084 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.662162 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.662366 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.662398 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.662429 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.662451 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.765815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.765878 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.765901 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.765930 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.765953 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.868828 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.868900 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.868922 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.868949 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.868973 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.972018 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.972090 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.972115 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.972146 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:33 crc kubenswrapper[4932]: I0218 19:35:33.972170 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:33Z","lastTransitionTime":"2026-02-18T19:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.077619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.077709 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.077734 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.077765 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.077790 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.178919 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:34 crc kubenswrapper[4932]: E0218 19:35:34.179414 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.181513 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.181586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.181611 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.181641 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.181663 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.186681 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:30:14.589192408 +0000 UTC Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.284723 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.284763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.284775 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.284793 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.284807 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.364338 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.364389 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.364405 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.364433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.364470 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: E0218 19:35:34.384803 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.390332 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.390396 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.390413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.390436 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.390454 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.470721 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.470778 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.470800 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.470823 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.470841 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: E0218 19:35:34.490917 4932 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bf5b35af-cf95-424f-9da2-9aceebb0ceec\\\",\\\"systemUUID\\\":\\\"ded33a9e-53d9-4a60-ad08-559411f62337\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:34 crc kubenswrapper[4932]: E0218 19:35:34.491220 4932 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.493449 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.493488 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.493497 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.493511 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.493524 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.596560 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.596607 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.596619 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.596634 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.596646 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.699791 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.699837 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.699849 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.699868 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.699880 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.803297 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.803369 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.803390 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.803424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.803448 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.906452 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.906521 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.906547 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.906574 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:34 crc kubenswrapper[4932]: I0218 19:35:34.906593 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:34Z","lastTransitionTime":"2026-02-18T19:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.009561 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.009638 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.009660 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.009688 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.009709 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.113488 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.113565 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.113588 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.113618 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.113645 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.178691 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.178830 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.178703 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:35 crc kubenswrapper[4932]: E0218 19:35:35.178957 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:35 crc kubenswrapper[4932]: E0218 19:35:35.179047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:35 crc kubenswrapper[4932]: E0218 19:35:35.179246 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.187604 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 02:57:56.834095176 +0000 UTC Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.217303 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.217400 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.217421 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.217444 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.217462 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.320062 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.320123 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.320140 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.320163 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.320212 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.422940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.422994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.423009 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.423033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.423081 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.525925 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.526003 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.526020 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.526043 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.526064 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.629523 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.629564 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.629575 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.629591 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.629603 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.732900 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.732943 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.732956 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.732973 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.732984 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.836327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.836390 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.836409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.836432 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.836449 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.939304 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.939348 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.939363 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.939379 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:35 crc kubenswrapper[4932]: I0218 19:35:35.939390 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:35Z","lastTransitionTime":"2026-02-18T19:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.042770 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.042829 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.042844 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.042865 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.042876 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.146057 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.146106 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.146128 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.146158 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.146224 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.179085 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:36 crc kubenswrapper[4932]: E0218 19:35:36.179274 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.188241 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:33:01.55861455 +0000 UTC Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.249324 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.249379 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.249395 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.249423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.249446 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.352557 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.352616 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.352633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.352658 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.352674 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.455771 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.455822 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.455834 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.455858 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.455883 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.558262 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.558316 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.558334 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.558358 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.558374 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.661607 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.661681 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.661703 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.661739 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.661762 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.765321 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.765413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.765433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.765458 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.765476 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.868327 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.868391 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.868408 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.868432 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.868450 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.971741 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.971813 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.971831 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.971859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:36 crc kubenswrapper[4932]: I0218 19:35:36.971877 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:36Z","lastTransitionTime":"2026-02-18T19:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.075553 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.075618 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.075640 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.075671 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.075691 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.178203 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:37 crc kubenswrapper[4932]: E0218 19:35:37.178389 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.178217 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:37 crc kubenswrapper[4932]: E0218 19:35:37.178771 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.179046 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.179120 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.179138 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.179522 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.179741 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.180337 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:37 crc kubenswrapper[4932]: E0218 19:35:37.180686 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.189634 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:07:59.219936142 +0000 UTC Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.206058 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.237963 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21e3c087-c564-4f66-a656-c92a4e47fa72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:17Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0218 19:35:17.178478 6969 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178611 6969 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178682 6969 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:35:17.178776 6969 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.178928 6969 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:35:17.179579 6969 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:35:17.179590 6969 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:35:17.179618 6969 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:35:17.179632 6969 factory.go:656] Stopping watch factory\\\\nI0218 19:35:17.179640 6969 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:35:17.179653 6969 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:35:17.179681 6969 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:35:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b24829de50b7041c8
00855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xnfjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hbqb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.253245 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2r9kj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:35Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kdjbt\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.267993 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31b1deb-52e4-4a2b-84d2-7263235a9614\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3d49b33110c005074c926cb27774369c1aa68dbc56d47ed3fa29456a5b5e672\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efe5587ce56ca0dce30a3e010094421a89f4f6713c04baa601f96d1d5919248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://381008993cfc5f59da6f8dc90f823fbbb1ab84e53aa86978152d00b078452802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://434b1e26
7bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://434b1e267bd9ff2262a058bb1477f39a0a26d4b76c46aec970d9d683c14f61f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.285883 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6b1ceeb-ed25-4345-a294-674238130833\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd80983bc05658f4dacedf042b5c669290255dd503bccbc9164ad48e35e7d052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8509e4d576796f6d37306cabb9830536406b365ffc86a32c4e492ffd91e7d9eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.286218 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.286293 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.286317 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.286353 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.286377 4932 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.305720 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00afd8c891cd6ef5299f1e763a399157810a3d6be8d6088fac95321d3b42a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.324739 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6368285db3546a8f06c05958fbd6a4eff38a70f2a83bb62a54a679b93976af64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f3f6eb44616c845ff422f2fe351d9789babb6486d4d435baee8086e4a3dbc2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.337527 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bz9kj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4495ae98-57db-4409-87a7-56192683cc00\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9464d202ec9b4f4bfffcb9028e72bc236f38341f366ea30cbf2154d0e4ecd2ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w
b9jl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bz9kj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.362831 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcce3828-1fe2-412c-85ca-8f2823938570\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23ec1738bcf1d2dd647db0d373af934b11154dff53044ff834fe7257b32f17d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b
54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfac026c12a8bf498c2bb79250930f207d0064ffd0edbef2e5e24cfa93a62971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13d6e02f2655abbd491fd87d24365a4cd72db2765eb2c05c5553febfa7be962a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb79fa7b7288b35bb4bcd652d79107019527c1171639893fef92b89d26303412\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6cb57625b5582e558b55c361284729fc2052214bf528e7458937568887515e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa20d1549113cb3aad03ce60838085f0ad49599a6fb652c6818377b9baee6edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa20d1549113cb3aad03ce60838085f0ad49599a6fb652c6818377b9baee6edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676f220fbabbdf1f764a0ac9856dcfc9d8f6543b96228f378b3bd30c8ab34986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676f220fbabbdf1f764a0ac9856dcfc9d8f6543b96228f378b3bd30c8ab34986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://05c608056602e82f1d72f327241bccbf4b1a4f33f9e8512cbf3c44689c7e7ec0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05c608056602e82f1d72f327241bccbf4b1a4f33f9e8512cbf3c44689c7e7ec0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.375752 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.386906 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jmmxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45a22d6d-69dc-4c93-acd4-188dc6d1e315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73944342362d8717377f28bd401c556a923c5fef326a035c658a2c1106286155\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jmmxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.391138 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.391283 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.391305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.391522 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.391539 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.404686 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2740774-23d5-4857-9ac6-f0a01e38a64c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://942baabd54dd4153148e6ad568e59d4539d13fd4a4a6789bb16b6f21c595a849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9r7v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf9v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.425667 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e77eb8d5-cd29-49ef-9080-4cb12d3afa09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7217c94d36e4a0bb0279788870d98a77dea7e769b63c19015c3feaf2c7dd0db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed96274bec701b6aa385c6bbf0d15056bd6960c57136f50a18b64c9cebb7e06\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017996ac5e47001d4e33938d2ed91393149431ec701b79c072ce41244aa42c78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600610d5743a4edb356835d9269c4b883ba0401c91022b7a300f61684b3ec699\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7d72
15a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d7215a74ca79f793eb98fa72c0c5bc019e0f6bc37de112273294b78d28c218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa37ea5111671959c82f070b7eeded02ad8832990f570fc0a8cfd7daa4ccbb8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c2a3c7ba245c56e9823afe6c1fe709ac589de8fae401083d10cfc646aabb1bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:34:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7nqj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.438882 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64edee2c-efed-415d-8d8e-362edad7c5bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594483c121e009e698b7290074b767a4a20464b5d37055d5435840a03f196979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ba330b854b9161594cde885e7bfe0490d9d4125da0d045b227c7ba1617a1a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8llv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.454110 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09eb6bd5-8022-44b5-aadc-6d5b5af4c94f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498fbcf6033592847b9e3dfbe4d5b0129f81981613cb1e980e7a45a88a35810e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bf167203a6640dd3f2e074eaa5dd51c61cdcb63e0cf550ce070d11b0ed66c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60b6804c89fcf0be7872ef704cd2e0147bee9313707ff084c3b6ff5f0f2d3b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.472841 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cddaa075862e8a1f87d4b962e7b4e3b05ded16f555cd6d0fdb8b8293d7e6b2ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.491473 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sj8bg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b8d80e2-307e-43b6-9003-e77eef51e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:35:08Z\\\",\\\"message\\\":\\\"2026-02-18T19:34:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b\\\\n2026-02-18T19:34:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db60d678-7f4a-4742-b311-849d4b37080b to /host/opt/cni/bin/\\\\n2026-02-18T19:34:22Z [verbose] multus-daemon started\\\\n2026-02-18T19:34:22Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:35:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp7ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:34:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sj8bg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.493502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.493591 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.493610 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.493669 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.493860 4932 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.510406 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34f6a85c-e66d-4dd7-a145-95674593cba0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9
82ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:34:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 19:34:11.097360 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:34:11.100490 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3859777000/tls.crt::/tmp/serving-cert-3859777000/tls.key\\\\\\\"\\\\nI0218 19:34:16.400084 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:34:16.407735 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:34:16.407777 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:34:16.407822 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:34:16.407837 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:34:16.419800 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 19:34:16.419835 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 19:34:16.419888 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419897 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:34:16.419902 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:34:16.419905 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:34:16.419910 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:34:16.419913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 19:34:16.421164 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:34:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:33:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:33:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:33:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.527752 4932 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:34:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:35:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.596374 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.596411 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.596420 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.596433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.596443 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.699426 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.699464 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.699477 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.699492 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.699505 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.801490 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.801544 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.801563 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.801587 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.801605 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.904334 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.904409 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.904432 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.904467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:37 crc kubenswrapper[4932]: I0218 19:35:37.904491 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:37Z","lastTransitionTime":"2026-02-18T19:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.010908 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.010986 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.011023 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.011073 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.011098 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.113626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.113679 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.113697 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.113722 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.113739 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.178729 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:38 crc kubenswrapper[4932]: E0218 19:35:38.179307 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.190428 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:41:52.620619613 +0000 UTC Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.216940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.217009 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.217030 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.217074 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.217097 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.321229 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.321295 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.321315 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.321345 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.321366 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.424240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.424296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.424314 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.424391 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.424409 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.527407 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.527449 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.527459 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.527476 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.527489 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.630687 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.630763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.630785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.630817 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.630839 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.734236 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.734280 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.734290 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.734305 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.734316 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.836960 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.837022 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.837040 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.837068 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.837086 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.939841 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.939909 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.939931 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.939960 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:38 crc kubenswrapper[4932]: I0218 19:35:38.939979 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:38Z","lastTransitionTime":"2026-02-18T19:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.041959 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.042014 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.042030 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.042054 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.042071 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.145016 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.145071 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.145088 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.145110 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.145209 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.178165 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.178339 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.178340 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:39 crc kubenswrapper[4932]: E0218 19:35:39.178493 4932 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:35:39 crc kubenswrapper[4932]: E0218 19:35:39.178547 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.178590 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:39 crc kubenswrapper[4932]: E0218 19:35:39.178691 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:39 crc kubenswrapper[4932]: E0218 19:35:39.178777 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:39 crc kubenswrapper[4932]: E0218 19:35:39.179339 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs podName:1d73072e-7e9b-4ae7-92ca-5950da33ed6c nodeName:}" failed. No retries permitted until 2026-02-18 19:36:43.179070369 +0000 UTC m=+166.761025244 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs") pod "network-metrics-daemon-kdjbt" (UID: "1d73072e-7e9b-4ae7-92ca-5950da33ed6c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.190605 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:23:21.46863921 +0000 UTC Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.247413 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.247481 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.247503 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.247535 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.247558 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.351537 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.351589 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.351612 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.351641 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.351665 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.454011 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.454073 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.454091 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.454117 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.454141 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.556633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.556715 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.556740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.556768 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.556786 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.659389 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.659461 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.659483 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.659511 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.659532 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.762803 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.762845 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.762859 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.762877 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.762889 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.866331 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.866395 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.866423 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.866454 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.866477 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.968673 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.968740 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.968759 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.968784 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:39 crc kubenswrapper[4932]: I0218 19:35:39.968800 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:39Z","lastTransitionTime":"2026-02-18T19:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.071003 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.071082 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.071103 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.071131 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.071153 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.173445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.173491 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.173502 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.173517 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.173528 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.178983 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:40 crc kubenswrapper[4932]: E0218 19:35:40.179155 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.191421 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:22:25.067472898 +0000 UTC Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.276126 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.276193 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.276207 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.276226 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.276238 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.378713 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.378775 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.378792 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.378815 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.378833 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.481677 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.481726 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.481742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.481764 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.481783 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.585000 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.585075 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.585085 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.585103 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.585115 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.688359 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.688424 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.688443 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.688467 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.688483 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.791863 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.791938 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.791958 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.791988 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.792009 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.895003 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.895059 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.895075 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.895097 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.895114 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.997869 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.997928 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.997944 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.997968 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:40 crc kubenswrapper[4932]: I0218 19:35:40.997985 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:40Z","lastTransitionTime":"2026-02-18T19:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.100142 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.100222 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.100240 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.100263 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.100280 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.178260 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.178534 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:41 crc kubenswrapper[4932]: E0218 19:35:41.178530 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.178927 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:41 crc kubenswrapper[4932]: E0218 19:35:41.179010 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:41 crc kubenswrapper[4932]: E0218 19:35:41.179128 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.191684 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 08:53:52.217091186 +0000 UTC Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.203511 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.203569 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.203584 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.203603 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.203614 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.305866 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.305920 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.305930 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.305948 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.305960 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.409382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.409457 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.409476 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.409503 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.409522 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.513203 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.513249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.513260 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.513279 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.513292 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.615938 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.615993 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.616007 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.616027 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.616042 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.718735 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.718779 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.718816 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.718833 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.718846 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.821724 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.821763 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.821773 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.821790 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.821803 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.924673 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.924742 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.924766 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.924798 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:41 crc kubenswrapper[4932]: I0218 19:35:41.924820 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:41Z","lastTransitionTime":"2026-02-18T19:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.027382 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.027447 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.027470 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.027499 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.027520 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.130672 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.130734 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.130756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.130785 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.130807 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.179048 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:42 crc kubenswrapper[4932]: E0218 19:35:42.179317 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.192227 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:01:49.515726723 +0000 UTC Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.233851 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.233917 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.233940 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.233969 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.233994 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.337013 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.337075 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.337091 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.337114 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.337135 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.439964 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.440033 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.440056 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.440084 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.440112 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.543544 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.543614 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.543637 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.543664 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.543687 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.646532 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.646580 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.646590 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.646618 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.646628 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.748559 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.748609 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.748626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.748647 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.748665 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.852309 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.852407 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.852445 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.852475 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.852499 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.954249 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.954288 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.954299 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.954315 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:42 crc kubenswrapper[4932]: I0218 19:35:42.954326 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:42Z","lastTransitionTime":"2026-02-18T19:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.057268 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.057324 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.057343 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.057366 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.057383 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.160163 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.160558 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.160707 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.160843 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.160963 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.178954 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.178988 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.179080 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:43 crc kubenswrapper[4932]: E0218 19:35:43.179741 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:43 crc kubenswrapper[4932]: E0218 19:35:43.179844 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:43 crc kubenswrapper[4932]: E0218 19:35:43.179955 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.192594 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 13:14:31.591868248 +0000 UTC Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.264323 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.264388 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.264407 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.264433 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.264452 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.367596 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.367635 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.367646 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.367663 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.367673 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.469935 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.469972 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.469979 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.469994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.470004 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.572224 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.572265 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.572273 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.572287 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.572297 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.675667 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.675717 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.675733 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.675756 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.675771 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.778802 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.778874 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.778892 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.778914 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.778935 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.880988 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.881072 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.881097 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.881553 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.881875 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.984244 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.984276 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.984287 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.984301 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:43 crc kubenswrapper[4932]: I0218 19:35:43.984311 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:43Z","lastTransitionTime":"2026-02-18T19:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.086945 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.086994 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.087006 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.087031 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.087045 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.178448 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:44 crc kubenswrapper[4932]: E0218 19:35:44.178611 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.189087 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.189129 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.189141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.189157 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.189185 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.193657 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:41:17.243690407 +0000 UTC Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.291586 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.291625 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.291633 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.291648 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.291659 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.393246 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.393284 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.393296 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.393308 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.393318 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.496166 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.496243 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.496257 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.496300 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.496316 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.599808 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.599877 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.599905 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.599930 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.599949 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.704076 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.704129 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.704141 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.704161 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.704191 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.807069 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.807477 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.807631 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.807826 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.807980 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.866620 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.867157 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.867414 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.867626 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.867821 4932 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:35:44Z","lastTransitionTime":"2026-02-18T19:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.933890 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp"] Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.934460 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.939168 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.939540 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.939749 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.943087 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.981569 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.981545378 podStartE2EDuration="1m24.981545378s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:44.981502967 +0000 UTC m=+108.563457902" watchObservedRunningTime="2026-02-18 19:35:44.981545378 +0000 UTC m=+108.563500233" Feb 18 19:35:44 crc kubenswrapper[4932]: I0218 19:35:44.981736 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfpj" podStartSLOduration=84.981731772 podStartE2EDuration="1m24.981731772s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:44.961600893 +0000 UTC m=+108.543555768" watchObservedRunningTime="2026-02-18 
19:35:44.981731772 +0000 UTC m=+108.563686617" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.013239 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jmmxw" podStartSLOduration=86.013213974 podStartE2EDuration="1m26.013213974s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.013094171 +0000 UTC m=+108.595049076" watchObservedRunningTime="2026-02-18 19:35:45.013213974 +0000 UTC m=+108.595168869" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.045459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.045617 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.045665 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc 
kubenswrapper[4932]: I0218 19:35:45.045870 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.045959 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.059701 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podStartSLOduration=86.05967953 podStartE2EDuration="1m26.05967953s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.029434875 +0000 UTC m=+108.611389770" watchObservedRunningTime="2026-02-18 19:35:45.05967953 +0000 UTC m=+108.641634415" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.060253 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-z7nqj" podStartSLOduration=85.060241322 podStartE2EDuration="1m25.060241322s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.059909685 +0000 UTC m=+108.641864560" 
watchObservedRunningTime="2026-02-18 19:35:45.060241322 +0000 UTC m=+108.642196207" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.079327 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.079297067 podStartE2EDuration="1m28.079297067s" podCreationTimestamp="2026-02-18 19:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.079114333 +0000 UTC m=+108.661069218" watchObservedRunningTime="2026-02-18 19:35:45.079297067 +0000 UTC m=+108.661251942" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.121562 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-sj8bg" podStartSLOduration=85.121529518 podStartE2EDuration="1m25.121529518s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.121033967 +0000 UTC m=+108.702988852" watchObservedRunningTime="2026-02-18 19:35:45.121529518 +0000 UTC m=+108.703484413" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147133 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147241 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") 
" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147260 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147276 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147296 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147321 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.147364 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.149665 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.153525 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: \"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.167851 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.16783527 podStartE2EDuration="58.16783527s" podCreationTimestamp="2026-02-18 19:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.15346549 +0000 UTC m=+108.735420375" watchObservedRunningTime="2026-02-18 19:35:45.16783527 +0000 UTC m=+108.749790125" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.177409 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e98127f1-d583-4f3a-bb5b-efd0b4d6b367-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qlwhp\" (UID: 
\"e98127f1-d583-4f3a-bb5b-efd0b4d6b367\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.179148 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.179204 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.179434 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:45 crc kubenswrapper[4932]: E0218 19:35:45.179615 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:45 crc kubenswrapper[4932]: E0218 19:35:45.179741 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:45 crc kubenswrapper[4932]: E0218 19:35:45.179814 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.185686 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.185672348 podStartE2EDuration="32.185672348s" podCreationTimestamp="2026-02-18 19:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.168801882 +0000 UTC m=+108.750756727" watchObservedRunningTime="2026-02-18 19:35:45.185672348 +0000 UTC m=+108.767627203" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.194848 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:52:04.88344988 +0000 UTC Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.195593 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.201886 4932 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.260035 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.266565 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=12.266541691 podStartE2EDuration="12.266541691s" podCreationTimestamp="2026-02-18 19:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.263070353 +0000 UTC m=+108.845025238" watchObservedRunningTime="2026-02-18 19:35:45.266541691 +0000 UTC m=+108.848496566" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.889153 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" event={"ID":"e98127f1-d583-4f3a-bb5b-efd0b4d6b367","Type":"ContainerStarted","Data":"32c042b4c6ab5823a2643c599e7abae1d79bc6409dcebdafa2661577d133b350"} Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.889528 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" event={"ID":"e98127f1-d583-4f3a-bb5b-efd0b4d6b367","Type":"ContainerStarted","Data":"980dfca20ba2887bc027bdc5ecf95018bdbab1469583ed04a2db15b6eeef5b93"} Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.908265 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bz9kj" podStartSLOduration=86.908238495 podStartE2EDuration="1m26.908238495s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.3243692 +0000 UTC m=+108.906324045" watchObservedRunningTime="2026-02-18 19:35:45.908238495 +0000 UTC m=+109.490193350" Feb 18 19:35:45 crc kubenswrapper[4932]: I0218 19:35:45.908877 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlwhp" podStartSLOduration=85.908864619 podStartE2EDuration="1m25.908864619s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:45.907943228 +0000 UTC m=+109.489898113" watchObservedRunningTime="2026-02-18 19:35:45.908864619 +0000 UTC m=+109.490819474" Feb 18 19:35:46 crc kubenswrapper[4932]: I0218 19:35:46.179007 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:46 crc kubenswrapper[4932]: E0218 19:35:46.179164 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:47 crc kubenswrapper[4932]: I0218 19:35:47.314907 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:47 crc kubenswrapper[4932]: I0218 19:35:47.314927 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:47 crc kubenswrapper[4932]: E0218 19:35:47.315857 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:47 crc kubenswrapper[4932]: I0218 19:35:47.315891 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:47 crc kubenswrapper[4932]: E0218 19:35:47.315984 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:47 crc kubenswrapper[4932]: E0218 19:35:47.316056 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:48 crc kubenswrapper[4932]: I0218 19:35:48.178212 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:48 crc kubenswrapper[4932]: E0218 19:35:48.178502 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:48 crc kubenswrapper[4932]: I0218 19:35:48.179374 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:35:48 crc kubenswrapper[4932]: E0218 19:35:48.179600 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hbqb5_openshift-ovn-kubernetes(21e3c087-c564-4f66-a656-c92a4e47fa72)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" Feb 18 19:35:49 crc kubenswrapper[4932]: I0218 19:35:49.178432 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:49 crc kubenswrapper[4932]: I0218 19:35:49.178547 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:49 crc kubenswrapper[4932]: E0218 19:35:49.178712 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:49 crc kubenswrapper[4932]: I0218 19:35:49.178981 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:49 crc kubenswrapper[4932]: E0218 19:35:49.179082 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:49 crc kubenswrapper[4932]: E0218 19:35:49.179323 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:50 crc kubenswrapper[4932]: I0218 19:35:50.179106 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:50 crc kubenswrapper[4932]: E0218 19:35:50.179324 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:51 crc kubenswrapper[4932]: I0218 19:35:51.178795 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:51 crc kubenswrapper[4932]: I0218 19:35:51.178821 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:51 crc kubenswrapper[4932]: I0218 19:35:51.178848 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:51 crc kubenswrapper[4932]: E0218 19:35:51.178911 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:51 crc kubenswrapper[4932]: E0218 19:35:51.179050 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:51 crc kubenswrapper[4932]: E0218 19:35:51.179089 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:52 crc kubenswrapper[4932]: I0218 19:35:52.178301 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:52 crc kubenswrapper[4932]: E0218 19:35:52.178446 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:53 crc kubenswrapper[4932]: I0218 19:35:53.179247 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:53 crc kubenswrapper[4932]: I0218 19:35:53.179316 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:53 crc kubenswrapper[4932]: I0218 19:35:53.179271 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:53 crc kubenswrapper[4932]: E0218 19:35:53.179426 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:53 crc kubenswrapper[4932]: E0218 19:35:53.179523 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:53 crc kubenswrapper[4932]: E0218 19:35:53.179623 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.178902 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:54 crc kubenswrapper[4932]: E0218 19:35:54.179122 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.926646 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/1.log" Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.927379 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/0.log" Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.927455 4932 generic.go:334] "Generic (PLEG): container finished" podID="1b8d80e2-307e-43b6-9003-e77eef51e084" containerID="3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda" exitCode=1 Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.927527 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerDied","Data":"3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda"} Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.927598 4932 scope.go:117] "RemoveContainer" containerID="e2bcceb96b7f973ad77e3aea93f2a9612ac75639bd72e8206e9617cf679f39f7" Feb 18 19:35:54 crc kubenswrapper[4932]: I0218 19:35:54.928347 4932 scope.go:117] "RemoveContainer" containerID="3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda" Feb 18 19:35:54 crc kubenswrapper[4932]: E0218 19:35:54.928643 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-sj8bg_openshift-multus(1b8d80e2-307e-43b6-9003-e77eef51e084)\"" pod="openshift-multus/multus-sj8bg" podUID="1b8d80e2-307e-43b6-9003-e77eef51e084" Feb 18 19:35:55 crc kubenswrapper[4932]: I0218 19:35:55.179342 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:55 crc kubenswrapper[4932]: I0218 19:35:55.179356 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:55 crc kubenswrapper[4932]: E0218 19:35:55.179541 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:55 crc kubenswrapper[4932]: E0218 19:35:55.179685 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:55 crc kubenswrapper[4932]: I0218 19:35:55.180044 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:55 crc kubenswrapper[4932]: E0218 19:35:55.180432 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:55 crc kubenswrapper[4932]: I0218 19:35:55.932966 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/1.log" Feb 18 19:35:56 crc kubenswrapper[4932]: I0218 19:35:56.178804 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:56 crc kubenswrapper[4932]: E0218 19:35:56.179039 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:57 crc kubenswrapper[4932]: E0218 19:35:57.130123 4932 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 19:35:57 crc kubenswrapper[4932]: I0218 19:35:57.179130 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:57 crc kubenswrapper[4932]: I0218 19:35:57.179275 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:57 crc kubenswrapper[4932]: E0218 19:35:57.181990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:57 crc kubenswrapper[4932]: I0218 19:35:57.182021 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:57 crc kubenswrapper[4932]: E0218 19:35:57.182244 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:57 crc kubenswrapper[4932]: E0218 19:35:57.182414 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:35:57 crc kubenswrapper[4932]: E0218 19:35:57.317024 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 19:35:58 crc kubenswrapper[4932]: I0218 19:35:58.178247 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:35:58 crc kubenswrapper[4932]: E0218 19:35:58.178412 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:35:59 crc kubenswrapper[4932]: I0218 19:35:59.178933 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:35:59 crc kubenswrapper[4932]: I0218 19:35:59.179051 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:35:59 crc kubenswrapper[4932]: E0218 19:35:59.179096 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:35:59 crc kubenswrapper[4932]: I0218 19:35:59.179135 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:35:59 crc kubenswrapper[4932]: E0218 19:35:59.179240 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:35:59 crc kubenswrapper[4932]: E0218 19:35:59.179395 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:36:00 crc kubenswrapper[4932]: I0218 19:36:00.179103 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:00 crc kubenswrapper[4932]: E0218 19:36:00.179346 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:36:01 crc kubenswrapper[4932]: I0218 19:36:01.180101 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:01 crc kubenswrapper[4932]: I0218 19:36:01.180250 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:01 crc kubenswrapper[4932]: I0218 19:36:01.180112 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:01 crc kubenswrapper[4932]: E0218 19:36:01.180372 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:36:01 crc kubenswrapper[4932]: E0218 19:36:01.180514 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:36:01 crc kubenswrapper[4932]: E0218 19:36:01.180648 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:36:02 crc kubenswrapper[4932]: I0218 19:36:02.178293 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:02 crc kubenswrapper[4932]: E0218 19:36:02.178517 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:36:02 crc kubenswrapper[4932]: E0218 19:36:02.318917 4932 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.178290 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:03 crc kubenswrapper[4932]: E0218 19:36:03.178465 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.178807 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.178962 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:03 crc kubenswrapper[4932]: E0218 19:36:03.179091 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.179218 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf" Feb 18 19:36:03 crc kubenswrapper[4932]: E0218 19:36:03.179312 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.962806 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/3.log" Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.965102 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerStarted","Data":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} Feb 18 19:36:03 crc kubenswrapper[4932]: I0218 19:36:03.965551 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:36:04 crc kubenswrapper[4932]: I0218 19:36:04.152380 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podStartSLOduration=104.152354653 podStartE2EDuration="1m44.152354653s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:03.99162092 +0000 UTC m=+127.573575765" watchObservedRunningTime="2026-02-18 19:36:04.152354653 +0000 UTC m=+127.734309518" Feb 18 19:36:04 crc kubenswrapper[4932]: I0218 19:36:04.153718 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kdjbt"] Feb 18 19:36:04 crc kubenswrapper[4932]: I0218 19:36:04.153810 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:04 crc kubenswrapper[4932]: E0218 19:36:04.153913 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:36:04 crc kubenswrapper[4932]: I0218 19:36:04.178387 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:04 crc kubenswrapper[4932]: E0218 19:36:04.178510 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:36:05 crc kubenswrapper[4932]: I0218 19:36:05.179394 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:05 crc kubenswrapper[4932]: E0218 19:36:05.179526 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:36:05 crc kubenswrapper[4932]: I0218 19:36:05.179587 4932 scope.go:117] "RemoveContainer" containerID="3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda" Feb 18 19:36:05 crc kubenswrapper[4932]: I0218 19:36:05.179678 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:05 crc kubenswrapper[4932]: I0218 19:36:05.179701 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:05 crc kubenswrapper[4932]: E0218 19:36:05.179915 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:36:05 crc kubenswrapper[4932]: E0218 19:36:05.180035 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:36:05 crc kubenswrapper[4932]: I0218 19:36:05.975545 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/1.log" Feb 18 19:36:05 crc kubenswrapper[4932]: I0218 19:36:05.975968 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerStarted","Data":"abaae01c3d1488753c134b713c5ac61b4207745b6a2dc1624d7639c5e6d2387b"} Feb 18 19:36:06 crc kubenswrapper[4932]: I0218 19:36:06.178999 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:06 crc kubenswrapper[4932]: E0218 19:36:06.179141 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:36:07 crc kubenswrapper[4932]: I0218 19:36:07.178613 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:07 crc kubenswrapper[4932]: I0218 19:36:07.178613 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:07 crc kubenswrapper[4932]: E0218 19:36:07.181492 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kdjbt" podUID="1d73072e-7e9b-4ae7-92ca-5950da33ed6c" Feb 18 19:36:07 crc kubenswrapper[4932]: I0218 19:36:07.181561 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:07 crc kubenswrapper[4932]: E0218 19:36:07.181684 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:36:07 crc kubenswrapper[4932]: E0218 19:36:07.181798 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:36:08 crc kubenswrapper[4932]: I0218 19:36:08.178824 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:08 crc kubenswrapper[4932]: I0218 19:36:08.181993 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 19:36:08 crc kubenswrapper[4932]: I0218 19:36:08.182234 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.178252 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.178299 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.178734 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.181718 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.181791 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.181968 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 19:36:09 crc kubenswrapper[4932]: I0218 19:36:09.184096 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.430220 4932 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 18 19:36:15 crc 
kubenswrapper[4932]: I0218 19:36:15.477573 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-cn2nc"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.478406 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cn2nc" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.481966 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.483398 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.484235 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.485075 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.485475 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.485980 4932 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-nl2j4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.486443 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"machine-approver-sa-dockercfg-nl2j4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.486250 4932 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.486868 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.487222 4932 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: secrets "machine-approver-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.487269 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace 
\"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.487310 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gkgsj"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.487758 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.489091 4932 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: configmaps "machine-approver-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.489146 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-approver-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.489529 4932 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.489582 4932 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.489773 4932 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.489928 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.490097 4932 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.490158 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch 
*v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.492372 4932 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.492431 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.492535 4932 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.492567 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API 
group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.492661 4932 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.492690 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.492776 4932 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.492808 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.493794 4932 reflector.go:561] 
object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.493835 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: W0218 19:36:15.493952 4932 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 18 19:36:15 crc kubenswrapper[4932]: E0218 19:36:15.493986 4932 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.500024 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fgjll"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.501105 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.504053 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xnxl9"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.504789 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.511100 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.511395 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.511445 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.511623 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.511707 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.511774 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.512162 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jr49c"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.512685 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.513064 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.514550 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.518591 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.518800 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.519108 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.519200 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.518813 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.520066 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.520237 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.520264 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.520371 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 
18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.520441 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.523003 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f874p"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.523630 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.524637 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g2qvz"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.525234 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.525683 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.526359 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.527033 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.528705 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.529282 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.533257 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.534212 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.538342 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.538727 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.538438 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.538983 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.538510 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.539231 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.539994 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.539246 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.539288 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.539299 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.540818 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.541197 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.541361 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.541507 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.541643 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.541823 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.541959 4932 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.542105 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.542293 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.542437 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.543147 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.543576 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.543864 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.543970 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.544079 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.544262 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.544562 4932 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.544603 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.544682 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.545204 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.545740 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.545890 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.545990 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546099 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546228 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546317 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546438 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hphc8"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546494 4932 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-console-operator"/"serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546650 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546762 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546856 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.546957 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.547070 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.547158 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.547221 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.547940 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.548282 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.548366 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.548412 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.548510 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.557512 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.573474 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.574396 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.578287 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.579316 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.580781 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.581410 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.581637 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlcbj"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.581913 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.582636 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.584882 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.585403 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.585578 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.585681 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.585708 4932 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.586081 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.586196 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.586247 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.586336 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.586369 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.586456 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.587382 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.587973 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.588150 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.588673 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.588791 4932 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.589246 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.591148 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cn2nc"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.593082 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.594081 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.596014 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.597294 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.600456 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.611252 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.612838 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.613348 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.623874 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vqskh"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.627575 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-8xrbm"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.632515 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.635010 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.635327 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.635015 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.636154 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.636378 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.636867 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.637152 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.637487 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.639285 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.639707 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.640146 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.640475 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.640576 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.641399 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.641699 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.643864 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nzrr6"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.644367 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.645347 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.645659 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.646137 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.649370 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.650487 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.651150 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.652790 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5c79p"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.653565 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.655947 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-845v8"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.656644 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.657075 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.657442 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.658706 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.659725 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.660614 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.662199 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jx49r"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.662584 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.663891 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fgjll"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.664974 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.665693 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.666874 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.667990 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.668474 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.669303 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hphc8"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.670373 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.671728 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.672851 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g2qvz"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.674223 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gkgsj"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.676127 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.676438 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.678319 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682249 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09b62af0-116d-4918-a691-e7040fd7dc22-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682279 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-ca\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682397 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-serving-cert\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682444 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-client\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682463 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09b62af0-116d-4918-a691-e7040fd7dc22-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682535 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6l5kp\" (UniqueName: \"kubernetes.io/projected/d75d91b3-7800-4645-b272-768f9d02f81b-kube-api-access-6l5kp\") pod \"downloads-7954f5f757-cn2nc\" (UID: \"d75d91b3-7800-4645-b272-768f9d02f81b\") " pod="openshift-console/downloads-7954f5f757-cn2nc" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682573 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chxzb\" (UniqueName: \"kubernetes.io/projected/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-kube-api-access-chxzb\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682604 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdt6s\" (UniqueName: \"kubernetes.io/projected/09b62af0-116d-4918-a691-e7040fd7dc22-kube-api-access-sdt6s\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682630 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-config\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.682643 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-service-ca\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.686939 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.687103 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.689268 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jr49c"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.690491 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.691678 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nqdfv"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.693660 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.693739 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.695501 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.700626 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.700951 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.702896 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.705605 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xnxl9"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.709203 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vqskh"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.713388 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.714672 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.716248 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f874p"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.720250 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nzrr6"] Feb 18 19:36:15 crc 
kubenswrapper[4932]: I0218 19:36:15.722391 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.725375 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlcbj"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.726227 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.727604 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.729434 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.730627 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.732106 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nqdfv"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.732901 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rmh4d"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.733456 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.734276 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jsz8m"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.734935 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.735111 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.736814 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5c79p"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.737713 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jx49r"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.738736 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.741154 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.741414 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.742672 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.744135 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-845v8"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.745159 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rmh4d"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.746445 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-dpln6"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.747306 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.747801 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dpln6"] Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.760747 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.780832 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.782987 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdt6s\" (UniqueName: \"kubernetes.io/projected/09b62af0-116d-4918-a691-e7040fd7dc22-kube-api-access-sdt6s\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783031 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-config\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783049 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-service-ca\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") 
" pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783070 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-ca\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783088 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09b62af0-116d-4918-a691-e7040fd7dc22-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783112 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09b62af0-116d-4918-a691-e7040fd7dc22-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783129 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-serving-cert\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783143 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-client\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783189 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l5kp\" (UniqueName: \"kubernetes.io/projected/d75d91b3-7800-4645-b272-768f9d02f81b-kube-api-access-6l5kp\") pod \"downloads-7954f5f757-cn2nc\" (UID: \"d75d91b3-7800-4645-b272-768f9d02f81b\") " pod="openshift-console/downloads-7954f5f757-cn2nc" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783217 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chxzb\" (UniqueName: \"kubernetes.io/projected/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-kube-api-access-chxzb\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.783906 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09b62af0-116d-4918-a691-e7040fd7dc22-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.784093 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-ca\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.784127 4932 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-config\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.784837 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-service-ca\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.788348 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-serving-cert\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.788388 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-etcd-client\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.788717 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09b62af0-116d-4918-a691-e7040fd7dc22-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.800702 4932 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.820540 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.880542 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.900328 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.920765 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.941669 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.960849 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 19:36:15 crc kubenswrapper[4932]: I0218 19:36:15.981002 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.000196 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.021389 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.041775 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" 
Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.061991 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.081407 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.102605 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.121250 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.142433 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.161529 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.182111 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.201732 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.221919 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.241834 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.261878 4932 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.281565 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.312968 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.321311 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.341988 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.361949 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.383007 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.401232 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.420946 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.441401 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.462078 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 
18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.482129 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.501167 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.521957 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.542256 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.561954 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.582099 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.601608 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.622039 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.642030 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.660108 4932 request.go:700] Waited for 1.015405723s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&limit=500&resourceVersion=0 Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.662111 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.681246 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.701258 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.722216 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.741793 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.761076 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.780909 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.801380 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.821528 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 19:36:16 crc 
kubenswrapper[4932]: I0218 19:36:16.841013 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.860902 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.881161 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.902244 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.931796 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.941378 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.962251 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 19:36:16 crc kubenswrapper[4932]: I0218 19:36:16.981612 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.001903 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.020858 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.041217 4932 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.061970 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.081004 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.101428 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.121124 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.141764 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.162146 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.180392 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.201305 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.221433 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.241625 4932 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.261728 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.280622 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.301938 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.321798 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.341529 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.361654 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.381625 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.401073 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.421041 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.441903 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.461231 4932 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.480788 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.501891 4932 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.521533 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.562801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdt6s\" (UniqueName: \"kubernetes.io/projected/09b62af0-116d-4918-a691-e7040fd7dc22-kube-api-access-sdt6s\") pod \"openshift-controller-manager-operator-756b6f6bc6-gk6st\" (UID: \"09b62af0-116d-4918-a691-e7040fd7dc22\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.589845 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chxzb\" (UniqueName: \"kubernetes.io/projected/8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e-kube-api-access-chxzb\") pod \"etcd-operator-b45778765-hphc8\" (UID: \"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.607800 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd23a7-1b44-440f-be4a-8c236cf8902b-serving-cert\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608053 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-trusted-ca\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608163 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608292 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-etcd-serving-ca\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608392 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-config\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608494 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h82qp\" (UniqueName: 
\"kubernetes.io/projected/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-kube-api-access-h82qp\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608603 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-encryption-config\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608710 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-serving-cert\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608814 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-config\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.608899 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-auth-proxy-config\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc 
kubenswrapper[4932]: I0218 19:36:17.609001 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9a7e80fe-b260-461e-a11b-633a14eb304d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609160 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvqqv\" (UniqueName: \"kubernetes.io/projected/c7dff6ec-6703-40fb-a94a-c1d8b4641703-kube-api-access-bvqqv\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609266 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-service-ca\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609298 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-serving-cert\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-machine-approver-tls\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609364 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mg62\" (UniqueName: \"kubernetes.io/projected/18e44919-11c5-4974-9c71-ff803e668247-kube-api-access-7mg62\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609392 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-dir\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609425 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609456 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-etcd-client\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 
18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609485 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609514 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-trusted-ca\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609542 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-policies\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609571 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609614 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609645 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609676 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-config\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609703 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26869f13-c7ee-411c-85a1-72338142184c-audit-dir\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609731 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47777b7a-7599-4366-8e0f-a2ddf382e6ef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:17 crc 
kubenswrapper[4932]: I0218 19:36:17.609763 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609791 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-audit\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609820 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a7e80fe-b260-461e-a11b-633a14eb304d-serving-cert\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609847 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-audit-policies\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609876 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-config\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: 
\"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609908 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxwvv\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-kube-api-access-kxwvv\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609936 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-client-ca\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609965 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-tls\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.609995 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610024 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610053 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610084 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-client-ca\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610163 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e44919-11c5-4974-9c71-ff803e668247-serving-cert\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610274 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610478 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-etcd-client\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610644 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbq9\" (UniqueName: \"kubernetes.io/projected/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-kube-api-access-bfbq9\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610714 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-config\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610797 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-trusted-ca-bundle\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc 
kubenswrapper[4932]: E0218 19:36:17.610842 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.110814507 +0000 UTC m=+141.692769492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.610952 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx42h\" (UniqueName: \"kubernetes.io/projected/f9f46b79-f300-42de-a2c3-a35670822a3b-kube-api-access-mx42h\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611033 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-certificates\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611070 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jr49c\" (UID: 
\"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611104 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-console-config\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611143 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-config\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611198 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611236 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f446d\" (UniqueName: \"kubernetes.io/projected/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-kube-api-access-f446d\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611275 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-serving-cert\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611308 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-images\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611372 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611403 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm4fr\" (UniqueName: \"kubernetes.io/projected/26869f13-c7ee-411c-85a1-72338142184c-kube-api-access-gm4fr\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611444 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47777b7a-7599-4366-8e0f-a2ddf382e6ef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" 
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611475 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611539 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611578 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-config\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611607 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-serving-cert\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611637 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4tqj\" (UniqueName: 
\"kubernetes.io/projected/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-kube-api-access-t4tqj\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611697 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7dff6ec-6703-40fb-a94a-c1d8b4641703-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611764 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-bound-sa-token\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611809 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-279w9\" (UniqueName: \"kubernetes.io/projected/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-kube-api-access-279w9\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611852 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-oauth-serving-cert\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 
19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.611904 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-encryption-config\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612042 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-audit-dir\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612099 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5ca023-fc82-4365-b2f9-f57220013a9f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bldhq\" (UID: \"1c5ca023-fc82-4365-b2f9-f57220013a9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612232 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-serving-cert\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612290 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxl8c\" (UniqueName: \"kubernetes.io/projected/1c5ca023-fc82-4365-b2f9-f57220013a9f-kube-api-access-qxl8c\") pod 
\"cluster-samples-operator-665b6dd947-bldhq\" (UID: \"1c5ca023-fc82-4365-b2f9-f57220013a9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612353 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dff6ec-6703-40fb-a94a-c1d8b4641703-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612430 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmwdb\" (UniqueName: \"kubernetes.io/projected/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-kube-api-access-kmwdb\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612537 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-node-pullsecrets\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612573 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-service-ca-bundle\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 
19:36:17.612786 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612874 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2lqf\" (UniqueName: \"kubernetes.io/projected/28fd23a7-1b44-440f-be4a-8c236cf8902b-kube-api-access-b2lqf\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.612957 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62pvb\" (UniqueName: \"kubernetes.io/projected/9a7e80fe-b260-461e-a11b-633a14eb304d-kube-api-access-62pvb\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613059 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613152 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613264 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-oauth-config\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613313 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-image-import-ca\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613367 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47777b7a-7599-4366-8e0f-a2ddf382e6ef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613411 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l6wf\" (UniqueName: \"kubernetes.io/projected/47777b7a-7599-4366-8e0f-a2ddf382e6ef-kube-api-access-2l6wf\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.613456 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.615626 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l5kp\" (UniqueName: \"kubernetes.io/projected/d75d91b3-7800-4645-b272-768f9d02f81b-kube-api-access-6l5kp\") pod \"downloads-7954f5f757-cn2nc\" (UID: \"d75d91b3-7800-4645-b272-768f9d02f81b\") " pod="openshift-console/downloads-7954f5f757-cn2nc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.620688 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-cn2nc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.641534 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.673789 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.679362 4932 request.go:700] Waited for 1.359654297s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0 Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.681487 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.702396 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714082 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714217 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74a8d999-1731-4a72-8ca8-25913744a8e7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714246 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714264 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa8e769a-613b-40f2-9d07-b034d7871302-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nzrr6\" (UID: \"aa8e769a-613b-40f2-9d07-b034d7871302\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:17 crc kubenswrapper[4932]: E0218 19:36:17.714297 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.214268673 +0000 UTC m=+141.796223558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714339 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-certs\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714381 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bd39f7e2-211c-4104-a72d-5374a6e95ee1-srv-cert\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714423 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-trusted-ca-bundle\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vhpt\" (UniqueName: \"kubernetes.io/projected/bac9c1de-1cfe-48d3-aafc-ddb41647c661-kube-api-access-8vhpt\") pod \"ingress-canary-rmh4d\" 
(UID: \"bac9c1de-1cfe-48d3-aafc-ddb41647c661\") " pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714494 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714526 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prh8n\" (UniqueName: \"kubernetes.io/projected/93bf45fc-6447-479a-83d0-c9418ecb8270-kube-api-access-prh8n\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714597 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.714682 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f446d\" (UniqueName: \"kubernetes.io/projected/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-kube-api-access-f446d\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715016 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6fc8d511-a907-4f74-9a1c-e262d684b6a5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715101 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkzpp\" (UniqueName: \"kubernetes.io/projected/df2da7f7-2427-4099-ba40-855a7e850256-kube-api-access-xkzpp\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715235 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715284 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/522d227a-c827-415e-9e8b-e5907ba83363-service-ca-bundle\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715333 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715383 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftj25\" (UniqueName: \"kubernetes.io/projected/522d227a-c827-415e-9e8b-e5907ba83363-kube-api-access-ftj25\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715428 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/998697c8-1e0d-46ae-b92f-ae8faf0faef5-proxy-tls\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715499 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-mountpoint-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-config\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715592 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-serving-cert\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715637 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4tqj\" (UniqueName: \"kubernetes.io/projected/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-kube-api-access-t4tqj\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715685 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/547cf2c3-4842-4d4e-ac24-8b2b1ec93a15-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xfmpj\" (UID: \"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715737 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-279w9\" (UniqueName: \"kubernetes.io/projected/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-kube-api-access-279w9\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715780 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgpm\" (UniqueName: \"kubernetes.io/projected/81931b41-8917-4936-9e02-52f7c8c0f1c1-kube-api-access-stgpm\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715828 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5ca023-fc82-4365-b2f9-f57220013a9f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bldhq\" (UID: \"1c5ca023-fc82-4365-b2f9-f57220013a9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715854 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.715876 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/715b331b-b140-461c-9a06-ba6ede3af8b6-apiservice-cert\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.716017 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6fc8d511-a907-4f74-9a1c-e262d684b6a5-images\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.716052 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/4c04fd14-9dfc-4c0f-8125-8663eac51a45-config\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.716081 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/715b331b-b140-461c-9a06-ba6ede3af8b6-tmpfs\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.716580 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.716808 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.716960 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxl8c\" (UniqueName: \"kubernetes.io/projected/1c5ca023-fc82-4365-b2f9-f57220013a9f-kube-api-access-qxl8c\") pod \"cluster-samples-operator-665b6dd947-bldhq\" (UID: \"1c5ca023-fc82-4365-b2f9-f57220013a9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 
19:36:17.717313 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-trusted-ca-bundle\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717556 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/998697c8-1e0d-46ae-b92f-ae8faf0faef5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717633 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/df2da7f7-2427-4099-ba40-855a7e850256-signing-cabundle\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717772 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717825 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c04fd14-9dfc-4c0f-8125-8663eac51a45-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: 
\"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717877 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh64w\" (UniqueName: \"kubernetes.io/projected/bd39f7e2-211c-4104-a72d-5374a6e95ee1-kube-api-access-nh64w\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717929 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/df2da7f7-2427-4099-ba40-855a7e850256-signing-key\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.717940 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718034 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62pvb\" (UniqueName: \"kubernetes.io/projected/9a7e80fe-b260-461e-a11b-633a14eb304d-kube-api-access-62pvb\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718081 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-oauth-config\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718121 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwssc\" (UniqueName: \"kubernetes.io/projected/547cf2c3-4842-4d4e-ac24-8b2b1ec93a15-kube-api-access-xwssc\") pod \"package-server-manager-789f6589d5-xfmpj\" (UID: \"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718156 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93bf45fc-6447-479a-83d0-c9418ecb8270-serving-cert\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718222 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-image-import-ca\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718262 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l6wf\" (UniqueName: \"kubernetes.io/projected/47777b7a-7599-4366-8e0f-a2ddf382e6ef-kube-api-access-2l6wf\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718305 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48qgh\" (UniqueName: \"kubernetes.io/projected/3f0021b0-4c6c-4085-9819-5c94471f320c-kube-api-access-48qgh\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718348 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd23a7-1b44-440f-be4a-8c236cf8902b-serving-cert\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718393 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-etcd-serving-ca\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718429 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-default-certificate\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718464 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c04fd14-9dfc-4c0f-8125-8663eac51a45-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718517 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-serving-cert\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718550 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-config\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718583 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9a7e80fe-b260-461e-a11b-633a14eb304d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718617 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fccl8\" (UniqueName: \"kubernetes.io/projected/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-kube-api-access-fccl8\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718648 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-metrics-tls\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718689 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvqqv\" (UniqueName: \"kubernetes.io/projected/c7dff6ec-6703-40fb-a94a-c1d8b4641703-kube-api-access-bvqqv\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718723 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-service-ca\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718754 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-serving-cert\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718791 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mg62\" (UniqueName: \"kubernetes.io/projected/18e44919-11c5-4974-9c71-ff803e668247-kube-api-access-7mg62\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718822 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718852 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718886 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/048e17bc-05bf-40e4-9f40-87d936fcf772-config-volume\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718918 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3710240-88d7-4611-bd77-6de0c54c1e3c-config\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718950 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-policies\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.718982 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719017 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c6e703e-85e3-4d17-a946-c17e42c27985-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxmcb\" (UID: \"2c6e703e-85e3-4d17-a946-c17e42c27985\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719055 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47777b7a-7599-4366-8e0f-a2ddf382e6ef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719104 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-etcd-serving-ca\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719107 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-stats-auth\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719163 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbggm\" (UniqueName: \"kubernetes.io/projected/aa8e769a-613b-40f2-9d07-b034d7871302-kube-api-access-sbggm\") pod \"multus-admission-controller-857f4d67dd-nzrr6\" (UID: \"aa8e769a-613b-40f2-9d07-b034d7871302\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719209 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81931b41-8917-4936-9e02-52f7c8c0f1c1-srv-cert\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719238 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719271 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a7e80fe-b260-461e-a11b-633a14eb304d-serving-cert\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719294 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-audit-policies\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-trusted-ca\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719341 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-node-bootstrap-token\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719363 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81931b41-8917-4936-9e02-52f7c8c0f1c1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719386 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f0021b0-4c6c-4085-9819-5c94471f320c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719445 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-client-ca\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719469 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5ksr\" (UniqueName: \"kubernetes.io/projected/998697c8-1e0d-46ae-b92f-ae8faf0faef5-kube-api-access-n5ksr\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719491 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3710240-88d7-4611-bd77-6de0c54c1e3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719520 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719544 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719566 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdlw8\" (UniqueName: \"kubernetes.io/projected/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-kube-api-access-sdlw8\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719590 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/715b331b-b140-461c-9a06-ba6ede3af8b6-webhook-cert\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719610 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719634 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-etcd-client\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719657 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfbq9\" (UniqueName: \"kubernetes.io/projected/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-kube-api-access-bfbq9\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719680 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-config\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719702 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx42h\" (UniqueName: \"kubernetes.io/projected/f9f46b79-f300-42de-a2c3-a35670822a3b-kube-api-access-mx42h\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719724 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-certificates\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719747 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-console-config\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719768 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93bf45fc-6447-479a-83d0-c9418ecb8270-config\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719790 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-csi-data-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719816 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-config\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719840 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-serving-cert\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719864 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-images\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719887 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctkmv\" (UniqueName: \"kubernetes.io/projected/2c6e703e-85e3-4d17-a946-c17e42c27985-kube-api-access-ctkmv\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxmcb\" (UID: \"2c6e703e-85e3-4d17-a946-c17e42c27985\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719914 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm4fr\" (UniqueName: \"kubernetes.io/projected/26869f13-c7ee-411c-85a1-72338142184c-kube-api-access-gm4fr\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719970 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47777b7a-7599-4366-8e0f-a2ddf382e6ef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719980 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-image-import-ca\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.719997 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720025 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6bvz\" (UniqueName: \"kubernetes.io/projected/6fc8d511-a907-4f74-9a1c-e262d684b6a5-kube-api-access-q6bvz\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720051 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7dff6ec-6703-40fb-a94a-c1d8b4641703-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720077 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720100 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-bound-sa-token\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720123 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-oauth-serving-cert\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720145 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-socket-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720167 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-registration-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720208 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95t2r\" (UniqueName: \"kubernetes.io/projected/e6399c54-0b37-424f-8535-f8b0ab33ff52-kube-api-access-95t2r\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720232 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-encryption-config\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720255 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-audit-dir\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720289 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-metrics-tls\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720328 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-serving-cert\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720353 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbm4m\" (UniqueName: \"kubernetes.io/projected/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-kube-api-access-rbm4m\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720381 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dff6ec-6703-40fb-a94a-c1d8b4641703-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720405 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmwdb\" (UniqueName: \"kubernetes.io/projected/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-kube-api-access-kmwdb\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720441 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-node-pullsecrets\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720462 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-service-ca-bundle\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720485 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-metrics-certs\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720506 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a8d999-1731-4a72-8ca8-25913744a8e7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720531 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2lqf\" (UniqueName: \"kubernetes.io/projected/28fd23a7-1b44-440f-be4a-8c236cf8902b-kube-api-access-b2lqf\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720555 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720580 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720601 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720622 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-496qr\" (UniqueName: \"kubernetes.io/projected/048e17bc-05bf-40e4-9f40-87d936fcf772-kube-api-access-496qr\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47777b7a-7599-4366-8e0f-a2ddf382e6ef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720683 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/048e17bc-05bf-40e4-9f40-87d936fcf772-secret-volume\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720721 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720747 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h9pm\" (UniqueName: \"kubernetes.io/projected/6ed62cdb-a7e1-4366-88b7-7c2ed1102203-kube-api-access-7h9pm\") pod \"migrator-59844c95c7-z8hql\" (UID: \"6ed62cdb-a7e1-4366-88b7-7c2ed1102203\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720776 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-trusted-ca\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720799 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720824 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ghj\" (UniqueName: \"kubernetes.io/projected/715b331b-b140-461c-9a06-ba6ede3af8b6-kube-api-access-d5ghj\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720866 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-config\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720890 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h82qp\" (UniqueName: \"kubernetes.io/projected/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-kube-api-access-h82qp\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720916 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-encryption-config\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720938 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f0021b0-4c6c-4085-9819-5c94471f320c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720962 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-auth-proxy-config\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720984 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f9pv\" (UniqueName: \"kubernetes.io/projected/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-kube-api-access-9f9pv\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721006 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/908b160b-0e48-4c2c-a35b-45fe25ca093f-config-volume\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721028 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bd39f7e2-211c-4104-a72d-5374a6e95ee1-profile-collector-cert\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721062 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6fc8d511-a907-4f74-9a1c-e262d684b6a5-proxy-tls\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721085 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-machine-approver-tls\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721107 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-dir\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721131 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-etcd-client\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721155 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vjgr\" (UniqueName: \"kubernetes.io/projected/908b160b-0e48-4c2c-a35b-45fe25ca093f-kube-api-access-6vjgr\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv"
Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721199 4932
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-plugins-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721225 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-trusted-ca\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721276 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a8d999-1731-4a72-8ca8-25913744a8e7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721307 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721353 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: 
\"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721378 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-config\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721425 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26869f13-c7ee-411c-85a1-72338142184c-audit-dir\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721447 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bac9c1de-1cfe-48d3-aafc-ddb41647c661-cert\") pod \"ingress-canary-rmh4d\" (UID: \"bac9c1de-1cfe-48d3-aafc-ddb41647c661\") " pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721497 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-audit\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721717 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-config\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721759 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/908b160b-0e48-4c2c-a35b-45fe25ca093f-metrics-tls\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721784 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3710240-88d7-4611-bd77-6de0c54c1e3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721811 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxwvv\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-kube-api-access-kxwvv\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721835 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-tls\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721861 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721885 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-client-ca\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.721910 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e44919-11c5-4974-9c71-ff803e668247-serving-cert\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.722087 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-config\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.722399 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9a7e80fe-b260-461e-a11b-633a14eb304d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.723057 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.723161 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-service-ca\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.723432 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-config\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.723752 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-client-ca\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.724332 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.724421 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-policies\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.720780 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.724971 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-config\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.725390 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.726419 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c5ca023-fc82-4365-b2f9-f57220013a9f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bldhq\" (UID: \"1c5ca023-fc82-4365-b2f9-f57220013a9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.726882 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-serving-cert\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.726968 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-config\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.727613 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-etcd-client\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.727979 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-serving-cert\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.728110 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 
19:36:17.728964 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-node-pullsecrets\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.729744 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-service-ca-bundle\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.729810 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-encryption-config\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: E0218 19:36:17.730114 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.230090836 +0000 UTC m=+141.812045721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.730367 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-auth-proxy-config\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.730672 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.731083 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-oauth-serving-cert\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.732116 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7dff6ec-6703-40fb-a94a-c1d8b4641703-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.732352 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-audit-dir\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.732465 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-images\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.732694 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.736278 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47777b7a-7599-4366-8e0f-a2ddf382e6ef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.736605 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-oauth-config\") pod 
\"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.736666 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7dff6ec-6703-40fb-a94a-c1d8b4641703-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.737668 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.737714 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-serving-cert\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.737987 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.738150 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.738515 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a7e80fe-b260-461e-a11b-633a14eb304d-serving-cert\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.738786 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd23a7-1b44-440f-be4a-8c236cf8902b-serving-cert\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.738882 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-encryption-config\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.738989 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.739801 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-config\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.741358 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.741705 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.741887 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.741955 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-config\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.742279 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.742598 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-serving-cert\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.743328 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-dir\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.743736 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-audit\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.744764 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.745052 4932 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/26869f13-c7ee-411c-85a1-72338142184c-audit-policies\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.745145 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/26869f13-c7ee-411c-85a1-72338142184c-audit-dir\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.746272 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-certificates\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.746764 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-console-config\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.748847 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-trusted-ca\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.748931 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-trusted-ca\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.750142 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.751757 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47777b7a-7599-4366-8e0f-a2ddf382e6ef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.752986 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-serving-cert\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.754264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26869f13-c7ee-411c-85a1-72338142184c-etcd-client\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.754799 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-tls\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.760885 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.764963 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-client-ca\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.786895 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.787972 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.798770 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-config\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.801337 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.812629 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.821456 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.822849 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823013 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/998697c8-1e0d-46ae-b92f-ae8faf0faef5-proxy-tls\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823047 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftj25\" (UniqueName: \"kubernetes.io/projected/522d227a-c827-415e-9e8b-e5907ba83363-kube-api-access-ftj25\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: E0218 19:36:17.823076 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.323045548 +0000 UTC m=+141.905000433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823122 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-mountpoint-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823223 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/547cf2c3-4842-4d4e-ac24-8b2b1ec93a15-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xfmpj\" (UID: \"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823256 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-mountpoint-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823274 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stgpm\" (UniqueName: \"kubernetes.io/projected/81931b41-8917-4936-9e02-52f7c8c0f1c1-kube-api-access-stgpm\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823311 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/715b331b-b140-461c-9a06-ba6ede3af8b6-apiservice-cert\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823342 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6fc8d511-a907-4f74-9a1c-e262d684b6a5-images\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823375 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c04fd14-9dfc-4c0f-8125-8663eac51a45-config\") pod 
\"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823407 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/715b331b-b140-461c-9a06-ba6ede3af8b6-tmpfs\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823455 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/998697c8-1e0d-46ae-b92f-ae8faf0faef5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823489 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/df2da7f7-2427-4099-ba40-855a7e850256-signing-cabundle\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823523 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c04fd14-9dfc-4c0f-8125-8663eac51a45-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823555 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-nh64w\" (UniqueName: \"kubernetes.io/projected/bd39f7e2-211c-4104-a72d-5374a6e95ee1-kube-api-access-nh64w\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823589 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/df2da7f7-2427-4099-ba40-855a7e850256-signing-key\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823656 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwssc\" (UniqueName: \"kubernetes.io/projected/547cf2c3-4842-4d4e-ac24-8b2b1ec93a15-kube-api-access-xwssc\") pod \"package-server-manager-789f6589d5-xfmpj\" (UID: \"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823687 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93bf45fc-6447-479a-83d0-c9418ecb8270-serving-cert\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823725 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48qgh\" (UniqueName: \"kubernetes.io/projected/3f0021b0-4c6c-4085-9819-5c94471f320c-kube-api-access-48qgh\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823775 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-default-certificate\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823804 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c04fd14-9dfc-4c0f-8125-8663eac51a45-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823842 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fccl8\" (UniqueName: \"kubernetes.io/projected/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-kube-api-access-fccl8\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823902 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-metrics-tls\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823938 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/048e17bc-05bf-40e4-9f40-87d936fcf772-config-volume\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.823969 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3710240-88d7-4611-bd77-6de0c54c1e3c-config\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824003 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/715b331b-b140-461c-9a06-ba6ede3af8b6-tmpfs\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824013 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c6e703e-85e3-4d17-a946-c17e42c27985-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxmcb\" (UID: \"2c6e703e-85e3-4d17-a946-c17e42c27985\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824100 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-stats-auth\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc 
kubenswrapper[4932]: I0218 19:36:17.824130 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbggm\" (UniqueName: \"kubernetes.io/projected/aa8e769a-613b-40f2-9d07-b034d7871302-kube-api-access-sbggm\") pod \"multus-admission-controller-857f4d67dd-nzrr6\" (UID: \"aa8e769a-613b-40f2-9d07-b034d7871302\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824154 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81931b41-8917-4936-9e02-52f7c8c0f1c1-srv-cert\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824205 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-trusted-ca\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824227 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-node-bootstrap-token\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824255 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81931b41-8917-4936-9e02-52f7c8c0f1c1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: 
\"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824275 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f0021b0-4c6c-4085-9819-5c94471f320c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824303 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5ksr\" (UniqueName: \"kubernetes.io/projected/998697c8-1e0d-46ae-b92f-ae8faf0faef5-kube-api-access-n5ksr\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824324 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3710240-88d7-4611-bd77-6de0c54c1e3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824347 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdlw8\" (UniqueName: \"kubernetes.io/projected/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-kube-api-access-sdlw8\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824367 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/715b331b-b140-461c-9a06-ba6ede3af8b6-webhook-cert\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824393 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824435 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93bf45fc-6447-479a-83d0-c9418ecb8270-config\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824456 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-csi-data-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824484 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctkmv\" (UniqueName: \"kubernetes.io/projected/2c6e703e-85e3-4d17-a946-c17e42c27985-kube-api-access-ctkmv\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxmcb\" (UID: \"2c6e703e-85e3-4d17-a946-c17e42c27985\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824522 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6bvz\" (UniqueName: \"kubernetes.io/projected/6fc8d511-a907-4f74-9a1c-e262d684b6a5-kube-api-access-q6bvz\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824544 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-socket-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824564 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-registration-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824585 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95t2r\" (UniqueName: \"kubernetes.io/projected/e6399c54-0b37-424f-8535-f8b0ab33ff52-kube-api-access-95t2r\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824607 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-operator-metrics\") 
pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824638 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-metrics-tls\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824672 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbm4m\" (UniqueName: \"kubernetes.io/projected/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-kube-api-access-rbm4m\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-metrics-certs\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824726 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a8d999-1731-4a72-8ca8-25913744a8e7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824758 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824782 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-496qr\" (UniqueName: \"kubernetes.io/projected/048e17bc-05bf-40e4-9f40-87d936fcf772-kube-api-access-496qr\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824811 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/048e17bc-05bf-40e4-9f40-87d936fcf772-secret-volume\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824835 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h9pm\" (UniqueName: \"kubernetes.io/projected/6ed62cdb-a7e1-4366-88b7-7c2ed1102203-kube-api-access-7h9pm\") pod \"migrator-59844c95c7-z8hql\" (UID: \"6ed62cdb-a7e1-4366-88b7-7c2ed1102203\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824859 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5ghj\" (UniqueName: \"kubernetes.io/projected/715b331b-b140-461c-9a06-ba6ede3af8b6-kube-api-access-d5ghj\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc 
kubenswrapper[4932]: I0218 19:36:17.824846 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c04fd14-9dfc-4c0f-8125-8663eac51a45-config\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824891 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f0021b0-4c6c-4085-9819-5c94471f320c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824915 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/908b160b-0e48-4c2c-a35b-45fe25ca093f-config-volume\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824938 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bd39f7e2-211c-4104-a72d-5374a6e95ee1-profile-collector-cert\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824964 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f9pv\" (UniqueName: \"kubernetes.io/projected/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-kube-api-access-9f9pv\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: 
\"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.824987 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6fc8d511-a907-4f74-9a1c-e262d684b6a5-proxy-tls\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825011 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vjgr\" (UniqueName: \"kubernetes.io/projected/908b160b-0e48-4c2c-a35b-45fe25ca093f-kube-api-access-6vjgr\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825032 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-plugins-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825065 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bac9c1de-1cfe-48d3-aafc-ddb41647c661-cert\") pod \"ingress-canary-rmh4d\" (UID: \"bac9c1de-1cfe-48d3-aafc-ddb41647c661\") " pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825087 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a8d999-1731-4a72-8ca8-25913744a8e7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: 
\"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825117 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825143 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/908b160b-0e48-4c2c-a35b-45fe25ca093f-metrics-tls\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825164 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3710240-88d7-4611-bd77-6de0c54c1e3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825230 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa8e769a-613b-40f2-9d07-b034d7871302-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nzrr6\" (UID: \"aa8e769a-613b-40f2-9d07-b034d7871302\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825252 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-certs\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825272 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74a8d999-1731-4a72-8ca8-25913744a8e7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825298 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vhpt\" (UniqueName: \"kubernetes.io/projected/bac9c1de-1cfe-48d3-aafc-ddb41647c661-kube-api-access-8vhpt\") pod \"ingress-canary-rmh4d\" (UID: \"bac9c1de-1cfe-48d3-aafc-ddb41647c661\") " pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825320 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bd39f7e2-211c-4104-a72d-5374a6e95ee1-srv-cert\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825346 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prh8n\" (UniqueName: \"kubernetes.io/projected/93bf45fc-6447-479a-83d0-c9418ecb8270-kube-api-access-prh8n\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825380 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6fc8d511-a907-4f74-9a1c-e262d684b6a5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825404 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzpp\" (UniqueName: \"kubernetes.io/projected/df2da7f7-2427-4099-ba40-855a7e850256-kube-api-access-xkzpp\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.825434 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/522d227a-c827-415e-9e8b-e5907ba83363-service-ca-bundle\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.826332 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/522d227a-c827-415e-9e8b-e5907ba83363-service-ca-bundle\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.826925 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-csi-data-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc 
kubenswrapper[4932]: I0218 19:36:17.827290 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-socket-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.827352 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-registration-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.827876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93bf45fc-6447-479a-83d0-c9418ecb8270-config\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.828718 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/715b331b-b140-461c-9a06-ba6ede3af8b6-apiservice-cert\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.829621 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/547cf2c3-4842-4d4e-ac24-8b2b1ec93a15-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xfmpj\" (UID: \"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.830333 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-trusted-ca\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.831028 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.831410 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/df2da7f7-2427-4099-ba40-855a7e850256-signing-cabundle\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.831989 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81931b41-8917-4936-9e02-52f7c8c0f1c1-srv-cert\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.832111 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-stats-auth\") pod \"router-default-5444994796-8xrbm\" (UID: 
\"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.832555 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/998697c8-1e0d-46ae-b92f-ae8faf0faef5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.833828 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-node-bootstrap-token\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.835345 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81931b41-8917-4936-9e02-52f7c8c0f1c1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.835413 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/048e17bc-05bf-40e4-9f40-87d936fcf772-secret-volume\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.835581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6fc8d511-a907-4f74-9a1c-e262d684b6a5-images\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.835810 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93bf45fc-6447-479a-83d0-c9418ecb8270-serving-cert\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.835995 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/998697c8-1e0d-46ae-b92f-ae8faf0faef5-proxy-tls\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.836237 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f0021b0-4c6c-4085-9819-5c94471f320c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.836377 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3710240-88d7-4611-bd77-6de0c54c1e3c-config\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:17 crc 
kubenswrapper[4932]: I0218 19:36:17.836579 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/048e17bc-05bf-40e4-9f40-87d936fcf772-config-volume\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.836889 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74a8d999-1731-4a72-8ca8-25913744a8e7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.837138 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-metrics-tls\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.837490 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c6e703e-85e3-4d17-a946-c17e42c27985-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxmcb\" (UID: \"2c6e703e-85e3-4d17-a946-c17e42c27985\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.837520 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/908b160b-0e48-4c2c-a35b-45fe25ca093f-config-volume\") pod \"dns-default-nqdfv\" (UID: 
\"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.837532 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c04fd14-9dfc-4c0f-8125-8663eac51a45-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.838391 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.839573 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3710240-88d7-4611-bd77-6de0c54c1e3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.840121 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-metrics-tls\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.840243 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6fc8d511-a907-4f74-9a1c-e262d684b6a5-proxy-tls\") 
pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.840345 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e6399c54-0b37-424f-8535-f8b0ab33ff52-plugins-dir\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.840481 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6fc8d511-a907-4f74-9a1c-e262d684b6a5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.840640 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-metrics-certs\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: E0218 19:36:17.840936 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.340917146 +0000 UTC m=+141.922872081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.841611 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.843249 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/908b160b-0e48-4c2c-a35b-45fe25ca093f-metrics-tls\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.843825 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74a8d999-1731-4a72-8ca8-25913744a8e7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.844264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f0021b0-4c6c-4085-9819-5c94471f320c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.844553 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bd39f7e2-211c-4104-a72d-5374a6e95ee1-srv-cert\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.844664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/df2da7f7-2427-4099-ba40-855a7e850256-signing-key\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.845234 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/522d227a-c827-415e-9e8b-e5907ba83363-default-certificate\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.845357 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/715b331b-b140-461c-9a06-ba6ede3af8b6-webhook-cert\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.845489 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/aa8e769a-613b-40f2-9d07-b034d7871302-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nzrr6\" (UID: \"aa8e769a-613b-40f2-9d07-b034d7871302\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.845756 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-certs\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.845964 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bac9c1de-1cfe-48d3-aafc-ddb41647c661-cert\") pod \"ingress-canary-rmh4d\" (UID: \"bac9c1de-1cfe-48d3-aafc-ddb41647c661\") " pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.845978 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e44919-11c5-4974-9c71-ff803e668247-serving-cert\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.851486 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bd39f7e2-211c-4104-a72d-5374a6e95ee1-profile-collector-cert\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.861221 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.881756 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.887212 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-machine-approver-tls\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.927225 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:17 crc kubenswrapper[4932]: E0218 19:36:17.927543 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.427514347 +0000 UTC m=+142.009469202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.937009 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f446d\" (UniqueName: \"kubernetes.io/projected/fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc-kube-api-access-f446d\") pod \"machine-api-operator-5694c8668f-g2qvz\" (UID: \"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.957824 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st"] Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.962245 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4tqj\" (UniqueName: \"kubernetes.io/projected/b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a-kube-api-access-t4tqj\") pod \"authentication-operator-69f744f599-pj7mv\" (UID: \"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:17 crc kubenswrapper[4932]: I0218 19:36:17.981564 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxl8c\" (UniqueName: \"kubernetes.io/projected/1c5ca023-fc82-4365-b2f9-f57220013a9f-kube-api-access-qxl8c\") pod \"cluster-samples-operator-665b6dd947-bldhq\" (UID: \"1c5ca023-fc82-4365-b2f9-f57220013a9f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:17 
crc kubenswrapper[4932]: I0218 19:36:17.995496 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hphc8"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.000818 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62pvb\" (UniqueName: \"kubernetes.io/projected/9a7e80fe-b260-461e-a11b-633a14eb304d-kube-api-access-62pvb\") pod \"openshift-config-operator-7777fb866f-gvnf8\" (UID: \"9a7e80fe-b260-461e-a11b-633a14eb304d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:18 crc kubenswrapper[4932]: W0218 19:36:18.014609 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a072d2a_dd0d_4fe3_a7d2_f5baaa9df95e.slice/crio-37dfb69bdbece6f1263e97ad0156a7fb06c297ea73fbcc0afa382c6e4704d527 WatchSource:0}: Error finding container 37dfb69bdbece6f1263e97ad0156a7fb06c297ea73fbcc0afa382c6e4704d527: Status 404 returned error can't find the container with id 37dfb69bdbece6f1263e97ad0156a7fb06c297ea73fbcc0afa382c6e4704d527 Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.015160 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l6wf\" (UniqueName: \"kubernetes.io/projected/47777b7a-7599-4366-8e0f-a2ddf382e6ef-kube-api-access-2l6wf\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.023008 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" event={"ID":"09b62af0-116d-4918-a691-e7040fd7dc22","Type":"ContainerStarted","Data":"a1b9ddb4e29529281b7db75aec531d1deedee40e13771990e37f0211d1d80b71"} Feb 18 19:36:18 crc kubenswrapper[4932]: 
I0218 19:36:18.024107 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" event={"ID":"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e","Type":"ContainerStarted","Data":"37dfb69bdbece6f1263e97ad0156a7fb06c297ea73fbcc0afa382c6e4704d527"} Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.030426 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.030887 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.53086812 +0000 UTC m=+142.112822955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.036383 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfbq9\" (UniqueName: \"kubernetes.io/projected/3f42d0c9-6a6b-42c2-8caf-87afbe45c75b-kube-api-access-bfbq9\") pod \"machine-approver-56656f9798-sjnpq\" (UID: \"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.060889 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvqqv\" (UniqueName: \"kubernetes.io/projected/c7dff6ec-6703-40fb-a94a-c1d8b4641703-kube-api-access-bvqqv\") pod \"openshift-apiserver-operator-796bbdcf4f-qgjzj\" (UID: \"c7dff6ec-6703-40fb-a94a-c1d8b4641703\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.070689 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.073313 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cn2nc"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.073345 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.079036 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mg62\" (UniqueName: \"kubernetes.io/projected/18e44919-11c5-4974-9c71-ff803e668247-kube-api-access-7mg62\") pod \"controller-manager-879f6c89f-gkgsj\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.098691 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h82qp\" (UniqueName: \"kubernetes.io/projected/d939dc03-30d6-4839-abd8-1d8d1bbf8cad-kube-api-access-h82qp\") pod \"console-operator-58897d9998-f874p\" (UID: \"d939dc03-30d6-4839-abd8-1d8d1bbf8cad\") " pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.105706 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.113954 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2lqf\" (UniqueName: \"kubernetes.io/projected/28fd23a7-1b44-440f-be4a-8c236cf8902b-kube-api-access-b2lqf\") pod \"route-controller-manager-6576b87f9c-cnq5q\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.125536 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.130934 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.131120 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.631090394 +0000 UTC m=+142.213045249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.131580 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.132043 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.632036035 +0000 UTC m=+142.213990880 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.132860 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.135261 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmwdb\" (UniqueName: \"kubernetes.io/projected/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-kube-api-access-kmwdb\") pod \"oauth-openshift-558db77b4-xnxl9\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.158807 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47777b7a-7599-4366-8e0f-a2ddf382e6ef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-6xmms\" (UID: \"47777b7a-7599-4366-8e0f-a2ddf382e6ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.175616 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-279w9\" (UniqueName: 
\"kubernetes.io/projected/7a63a8af-95ca-447b-9bfa-7aec1033c0b3-kube-api-access-279w9\") pod \"apiserver-76f77b778f-jr49c\" (UID: \"7a63a8af-95ca-447b-9bfa-7aec1033c0b3\") " pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.195810 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-bound-sa-token\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.214992 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxwvv\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-kube-api-access-kxwvv\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.236774 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.237442 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.737418805 +0000 UTC m=+142.319373650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.237540 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.237590 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx42h\" (UniqueName: \"kubernetes.io/projected/f9f46b79-f300-42de-a2c3-a35670822a3b-kube-api-access-mx42h\") pod \"console-f9d7485db-fgjll\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.263900 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm4fr\" (UniqueName: \"kubernetes.io/projected/26869f13-c7ee-411c-85a1-72338142184c-kube-api-access-gm4fr\") pod \"apiserver-7bbb656c7d-z2jc5\" (UID: \"26869f13-c7ee-411c-85a1-72338142184c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.278304 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftj25\" (UniqueName: \"kubernetes.io/projected/522d227a-c827-415e-9e8b-e5907ba83363-kube-api-access-ftj25\") pod \"router-default-5444994796-8xrbm\" (UID: \"522d227a-c827-415e-9e8b-e5907ba83363\") " pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.282249 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.289762 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.291459 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pj7mv"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.297215 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stgpm\" (UniqueName: \"kubernetes.io/projected/81931b41-8917-4936-9e02-52f7c8c0f1c1-kube-api-access-stgpm\") pod \"olm-operator-6b444d44fb-zjx26\" (UID: \"81931b41-8917-4936-9e02-52f7c8c0f1c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.298961 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.308568 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.318279 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c04fd14-9dfc-4c0f-8125-8663eac51a45-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9q42v\" (UID: \"4c04fd14-9dfc-4c0f-8125-8663eac51a45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.349069 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.349562 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.849547874 +0000 UTC m=+142.431502719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.350089 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.351422 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.368657 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.371282 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.378386 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwssc\" (UniqueName: \"kubernetes.io/projected/547cf2c3-4842-4d4e-ac24-8b2b1ec93a15-kube-api-access-xwssc\") pod \"package-server-manager-789f6589d5-xfmpj\" (UID: \"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.397519 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.397916 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctkmv\" (UniqueName: \"kubernetes.io/projected/2c6e703e-85e3-4d17-a946-c17e42c27985-kube-api-access-ctkmv\") pod \"control-plane-machine-set-operator-78cbb6b69f-jxmcb\" (UID: \"2c6e703e-85e3-4d17-a946-c17e42c27985\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.421843 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6bvz\" (UniqueName: \"kubernetes.io/projected/6fc8d511-a907-4f74-9a1c-e262d684b6a5-kube-api-access-q6bvz\") pod \"machine-config-operator-74547568cd-zfbtq\" (UID: \"6fc8d511-a907-4f74-9a1c-e262d684b6a5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.432453 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95t2r\" (UniqueName: \"kubernetes.io/projected/e6399c54-0b37-424f-8535-f8b0ab33ff52-kube-api-access-95t2r\") pod \"csi-hostpathplugin-dpln6\" (UID: \"e6399c54-0b37-424f-8535-f8b0ab33ff52\") " pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.438909 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbggm\" (UniqueName: \"kubernetes.io/projected/aa8e769a-613b-40f2-9d07-b034d7871302-kube-api-access-sbggm\") pod \"multus-admission-controller-857f4d67dd-nzrr6\" (UID: \"aa8e769a-613b-40f2-9d07-b034d7871302\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.455448 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.455959 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:18.955944326 +0000 UTC m=+142.537899171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.459691 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.462231 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5ksr\" (UniqueName: \"kubernetes.io/projected/998697c8-1e0d-46ae-b92f-ae8faf0faef5-kube-api-access-n5ksr\") pod \"machine-config-controller-84d6567774-845v8\" (UID: \"998697c8-1e0d-46ae-b92f-ae8faf0faef5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.473459 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.475366 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g2qvz"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.483617 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdlw8\" (UniqueName: \"kubernetes.io/projected/581a9ff6-cf7b-4bac-bd81-41c6fb080f36-kube-api-access-sdlw8\") pod \"machine-config-server-jsz8m\" (UID: \"581a9ff6-cf7b-4bac-bd81-41c6fb080f36\") " pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.489726 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.503506 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3710240-88d7-4611-bd77-6de0c54c1e3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n288z\" (UID: \"c3710240-88d7-4611-bd77-6de0c54c1e3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.512316 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.514117 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.517793 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-496qr\" (UniqueName: \"kubernetes.io/projected/048e17bc-05bf-40e4-9f40-87d936fcf772-kube-api-access-496qr\") pod \"collect-profiles-29524050-46gfc\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.518738 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.522868 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.536773 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.539846 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48qgh\" (UniqueName: \"kubernetes.io/projected/3f0021b0-4c6c-4085-9819-5c94471f320c-kube-api-access-48qgh\") pod \"kube-storage-version-migrator-operator-b67b599dd-xt96f\" (UID: \"3f0021b0-4c6c-4085-9819-5c94471f320c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:18 crc kubenswrapper[4932]: W0218 19:36:18.545487 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcbb6fa7_ef01_48aa_8ac8_ba4bb47d1ffc.slice/crio-f8f2d09121f9073bf68d4254eaf605a1ee162ce9c89249824fad9065e6c0c34a WatchSource:0}: Error finding container f8f2d09121f9073bf68d4254eaf605a1ee162ce9c89249824fad9065e6c0c34a: Status 404 returned error can't find the container with id f8f2d09121f9073bf68d4254eaf605a1ee162ce9c89249824fad9065e6c0c34a Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.552956 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.554339 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.566445 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.567006 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.584707 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fccl8\" (UniqueName: \"kubernetes.io/projected/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-kube-api-access-fccl8\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.586114 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fccl8\" (UniqueName: \"kubernetes.io/projected/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-kube-api-access-fccl8\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.586438 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.589482 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.089305298 +0000 UTC m=+142.671260143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.590524 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fccl8\" (UniqueName: \"kubernetes.io/projected/0a167a2c-fdc1-4d22-83b7-f1a63ab147bc-kube-api-access-fccl8\") pod \"dns-operator-744455d44c-vqskh\" (UID: \"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc\") " pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.593685 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.597462 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gkgsj"] Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.602870 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jsz8m" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.607306 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5ghj\" (UniqueName: \"kubernetes.io/projected/715b331b-b140-461c-9a06-ba6ede3af8b6-kube-api-access-d5ghj\") pod \"packageserver-d55dfcdfc-tcbfq\" (UID: \"715b331b-b140-461c-9a06-ba6ede3af8b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.611737 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h9pm\" (UniqueName: \"kubernetes.io/projected/6ed62cdb-a7e1-4366-88b7-7c2ed1102203-kube-api-access-7h9pm\") pod \"migrator-59844c95c7-z8hql\" (UID: \"6ed62cdb-a7e1-4366-88b7-7c2ed1102203\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.619372 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.633604 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh64w\" (UniqueName: \"kubernetes.io/projected/bd39f7e2-211c-4104-a72d-5374a6e95ee1-kube-api-access-nh64w\") pod \"catalog-operator-68c6474976-pkfx8\" (UID: \"bd39f7e2-211c-4104-a72d-5374a6e95ee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.642098 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbm4m\" (UniqueName: \"kubernetes.io/projected/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-kube-api-access-rbm4m\") pod \"marketplace-operator-79b997595-5c79p\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.660209 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vjgr\" (UniqueName: \"kubernetes.io/projected/908b160b-0e48-4c2c-a35b-45fe25ca093f-kube-api-access-6vjgr\") pod \"dns-default-nqdfv\" (UID: \"908b160b-0e48-4c2c-a35b-45fe25ca093f\") " pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.688265 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.688658 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:19.188642833 +0000 UTC m=+142.770597678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.694366 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f9pv\" (UniqueName: \"kubernetes.io/projected/04a7bf0c-8c31-4401-b3db-4b5168a0cac7-kube-api-access-9f9pv\") pod \"ingress-operator-5b745b69d9-bch48\" (UID: \"04a7bf0c-8c31-4401-b3db-4b5168a0cac7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.705782 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vhpt\" (UniqueName: \"kubernetes.io/projected/bac9c1de-1cfe-48d3-aafc-ddb41647c661-kube-api-access-8vhpt\") pod \"ingress-canary-rmh4d\" (UID: \"bac9c1de-1cfe-48d3-aafc-ddb41647c661\") " pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.726711 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prh8n\" (UniqueName: \"kubernetes.io/projected/93bf45fc-6447-479a-83d0-c9418ecb8270-kube-api-access-prh8n\") pod \"service-ca-operator-777779d784-t7f9j\" (UID: \"93bf45fc-6447-479a-83d0-c9418ecb8270\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.742695 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzpp\" (UniqueName: 
\"kubernetes.io/projected/df2da7f7-2427-4099-ba40-855a7e850256-kube-api-access-xkzpp\") pod \"service-ca-9c57cc56f-jx49r\" (UID: \"df2da7f7-2427-4099-ba40-855a7e850256\") " pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:18 crc kubenswrapper[4932]: W0218 19:36:18.742868 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18e44919_11c5_4974_9c71_ff803e668247.slice/crio-aa724fc4a2394799ac8478df313683148bbb44ac563a7fa7a5bf6e498abd0bc7 WatchSource:0}: Error finding container aa724fc4a2394799ac8478df313683148bbb44ac563a7fa7a5bf6e498abd0bc7: Status 404 returned error can't find the container with id aa724fc4a2394799ac8478df313683148bbb44ac563a7fa7a5bf6e498abd0bc7 Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.745986 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.753267 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.769937 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74a8d999-1731-4a72-8ca8-25913744a8e7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ng7nn\" (UID: \"74a8d999-1731-4a72-8ca8-25913744a8e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.788729 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.791996 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.792787 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.292765734 +0000 UTC m=+142.874720589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.809510 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.831024 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.844610 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.871504 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.876445 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.889135 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.894273 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.894458 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.39443263 +0000 UTC m=+142.976387475 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.894680 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.895138 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.395130595 +0000 UTC m=+142.977085430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.896299 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rmh4d" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.954326 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.997937 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.998087 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.498045129 +0000 UTC m=+143.079999974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:18 crc kubenswrapper[4932]: I0218 19:36:18.998447 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:18 crc kubenswrapper[4932]: E0218 19:36:18.998848 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.498829017 +0000 UTC m=+143.080783862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.003495 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jr49c"] Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.010990 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fgjll"] Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.030499 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xnxl9"] Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.032518 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8xrbm" event={"ID":"522d227a-c827-415e-9e8b-e5907ba83363","Type":"ContainerStarted","Data":"c19059a5247644dfb6f6673b50a243ae81f65e5ead5c8b30eb3d1f15b80a72b8"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.034859 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" event={"ID":"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b","Type":"ContainerStarted","Data":"9c58cdc919b520b7f14ab596c4bccc96d30d431bc8ff152393a702a7d052edb3"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.035882 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" event={"ID":"9a7e80fe-b260-461e-a11b-633a14eb304d","Type":"ContainerStarted","Data":"312f3f14ea087659b2bafcf65b0d3e93238060becb67f2c1bc28e39bfb82c2d5"} Feb 18 
19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.037511 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" event={"ID":"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a","Type":"ContainerStarted","Data":"2f6d8ca4a742b788eba554b61e535dd067b567bf7163e507a0bd42a1e40a120e"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.037557 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" event={"ID":"b9dffd61-e241-4aa6-9a3e-cd5ea9abd18a","Type":"ContainerStarted","Data":"b836577709b6122069d93f05915cfe6784ba1b6d249407c38ab2b2e650fd914d"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.046481 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" event={"ID":"8a072d2a-dd0d-4fe3-a7d2-f5baaa9df95e","Type":"ContainerStarted","Data":"b3050cbcf8c7bdd94e967767973a040ad00d2a27262f0f0929d358af15295afd"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.050803 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" event={"ID":"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc","Type":"ContainerStarted","Data":"f8f2d09121f9073bf68d4254eaf605a1ee162ce9c89249824fad9065e6c0c34a"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.052681 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" event={"ID":"28fd23a7-1b44-440f-be4a-8c236cf8902b","Type":"ContainerStarted","Data":"7444c4d3cedc79cae24f1e017b9fa1b3385d64a4dc475008ab7f7a213fdab561"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.054849 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" 
event={"ID":"09b62af0-116d-4918-a691-e7040fd7dc22","Type":"ContainerStarted","Data":"3cd379069faa7198662e37cd80ce297bf474753981df30f92eca5bcc49bd1703"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.059763 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" event={"ID":"18e44919-11c5-4974-9c71-ff803e668247","Type":"ContainerStarted","Data":"aa724fc4a2394799ac8478df313683148bbb44ac563a7fa7a5bf6e498abd0bc7"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.062028 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jsz8m" event={"ID":"581a9ff6-cf7b-4bac-bd81-41c6fb080f36","Type":"ContainerStarted","Data":"c43c48a5c0c4932771fe306dfce5d7e2345370d619671714a1bb357bd1e97e73"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.063165 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cn2nc" event={"ID":"d75d91b3-7800-4645-b272-768f9d02f81b","Type":"ContainerStarted","Data":"cab4a223ec7f156131206c10d378ba9415f29c4c714e115bedc13aa7d4ccf4f7"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.063203 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cn2nc" event={"ID":"d75d91b3-7800-4645-b272-768f9d02f81b","Type":"ContainerStarted","Data":"199659edc1e3b267f89184c2e65fda9d42bc658e582217b6947492d14b691cd4"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.063620 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cn2nc" Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.064313 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" 
event={"ID":"1c5ca023-fc82-4365-b2f9-f57220013a9f","Type":"ContainerStarted","Data":"928be0de59e62712d1016992d5448bf8b41eeb18ffaf06107fcdf7dc628a218a"} Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.065774 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-cn2nc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.065815 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cn2nc" podUID="d75d91b3-7800-4645-b272-768f9d02f81b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.067139 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.099269 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.099872 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.599842668 +0000 UTC m=+143.181797513 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.201092 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.202948 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.702935287 +0000 UTC m=+143.284890232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.304989 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.305784 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.805768089 +0000 UTC m=+143.387722934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.407050 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.408218 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:19.908203231 +0000 UTC m=+143.490158076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.510111 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.512135 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.012098437 +0000 UTC m=+143.594053282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.611733 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.612236 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.112218909 +0000 UTC m=+143.694173754 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.712641 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.713102 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.213084957 +0000 UTC m=+143.795039802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.782736 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hphc8" podStartSLOduration=119.782716929 podStartE2EDuration="1m59.782716929s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:19.750303717 +0000 UTC m=+143.332258572" watchObservedRunningTime="2026-02-18 19:36:19.782716929 +0000 UTC m=+143.364671774" Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.814188 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.814673 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.314661701 +0000 UTC m=+143.896616546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.853971 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gk6st" podStartSLOduration=119.853955747 podStartE2EDuration="1m59.853955747s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:19.85364718 +0000 UTC m=+143.435602045" watchObservedRunningTime="2026-02-18 19:36:19.853955747 +0000 UTC m=+143.435910592" Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.920899 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.921158 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.421134255 +0000 UTC m=+144.003089100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:19 crc kubenswrapper[4932]: I0218 19:36:19.921748 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:19 crc kubenswrapper[4932]: E0218 19:36:19.922329 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.422321851 +0000 UTC m=+144.004276696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.027421 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.027727 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.52771262 +0000 UTC m=+144.109667465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.031491 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms"] Feb 18 19:36:20 crc kubenswrapper[4932]: W0218 19:36:20.040879 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47777b7a_7599_4366_8e0f_a2ddf382e6ef.slice/crio-4fef3ca7d86fdecc1848945781c5b35bf05924569e6becee4d9139e644b17c11 WatchSource:0}: Error finding container 4fef3ca7d86fdecc1848945781c5b35bf05924569e6becee4d9139e644b17c11: Status 404 returned error can't find the container with id 4fef3ca7d86fdecc1848945781c5b35bf05924569e6becee4d9139e644b17c11 Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.051384 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.083379 4932 generic.go:334] "Generic (PLEG): container finished" podID="9a7e80fe-b260-461e-a11b-633a14eb304d" containerID="90d5be797db248d2640057ff04b30de72a2804fef0c0b456b194cc9f8c67977e" exitCode=0 Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.083822 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" event={"ID":"9a7e80fe-b260-461e-a11b-633a14eb304d","Type":"ContainerDied","Data":"90d5be797db248d2640057ff04b30de72a2804fef0c0b456b194cc9f8c67977e"} Feb 18 19:36:20 
crc kubenswrapper[4932]: I0218 19:36:20.119460 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jsz8m" event={"ID":"581a9ff6-cf7b-4bac-bd81-41c6fb080f36","Type":"ContainerStarted","Data":"ad441ed7a35730cadd8e8588e7a6cfffb9f3bb89aaef56283fd32a80054565e1"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.121951 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" event={"ID":"215a0eae-8c5b-4b0e-86f6-056bc6f696ff","Type":"ContainerStarted","Data":"6382a3d82fd4779d69e56bae634baaed056f7a56ccaabda7fcfd83e4fe75fc34"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.121995 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" event={"ID":"215a0eae-8c5b-4b0e-86f6-056bc6f696ff","Type":"ContainerStarted","Data":"62838236ab987cac95945631bbd754af35252c7b859d7a4d83e36fd02b26a5f7"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.122950 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.124677 4932 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-xnxl9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.124715 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" podUID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.129002 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.129324 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.629312965 +0000 UTC m=+144.211267810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.136317 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-8xrbm" event={"ID":"522d227a-c827-415e-9e8b-e5907ba83363","Type":"ContainerStarted","Data":"1ae97dc99305a261499f98c429df93a066a975f5e922cc42247e61773c47e815"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.138782 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" event={"ID":"47777b7a-7599-4366-8e0f-a2ddf382e6ef","Type":"ContainerStarted","Data":"4fef3ca7d86fdecc1848945781c5b35bf05924569e6becee4d9139e644b17c11"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.143100 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" event={"ID":"18e44919-11c5-4974-9c71-ff803e668247","Type":"ContainerStarted","Data":"92c4e7fb68e8f7dfb6986ed0cee4d733efb5ba5235fa8329b6cb5754629a9a84"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.150236 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.156934 4932 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-gkgsj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.156998 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" podUID="18e44919-11c5-4974-9c71-ff803e668247" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.159689 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" event={"ID":"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b","Type":"ContainerStarted","Data":"b4459f764e16527c18a51250cce5e306ee30ac9825f851a1c5c2d0d62885ff09"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.159720 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" event={"ID":"3f42d0c9-6a6b-42c2-8caf-87afbe45c75b","Type":"ContainerStarted","Data":"9aa902ec21846d5b8a2bdc23ea50f55f410c73ae16b4d511a2392d4478696a00"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.212942 4932 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pj7mv" podStartSLOduration=121.212921539 podStartE2EDuration="2m1.212921539s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.212478959 +0000 UTC m=+143.794433804" watchObservedRunningTime="2026-02-18 19:36:20.212921539 +0000 UTC m=+143.794876384" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.214580 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-cn2nc" podStartSLOduration=120.214570495 podStartE2EDuration="2m0.214570495s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.189272142 +0000 UTC m=+143.771226987" watchObservedRunningTime="2026-02-18 19:36:20.214570495 +0000 UTC m=+143.796525340" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.221507 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" event={"ID":"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc","Type":"ContainerStarted","Data":"7ae75ec1437d0c88ac24b9a4a8b92017729c372e0745157e709e7ca5214ae506"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.221551 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" event={"ID":"fcbb6fa7-ef01-48aa-8ac8-ba4bb47d1ffc","Type":"ContainerStarted","Data":"067eecdb931c7ca9cb9ad3084b5746917be92d6e7400df1d520f86f7f2c84913"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.234469 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.257668 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.757640146 +0000 UTC m=+144.339594991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.263843 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" event={"ID":"28fd23a7-1b44-440f-be4a-8c236cf8902b","Type":"ContainerStarted","Data":"34ae58b97ea4a3420f81b7dbc9be8a4d3eb79fc358a2e2df9dc60f04b8d15203"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.265350 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.281640 4932 generic.go:334] "Generic (PLEG): container finished" podID="7a63a8af-95ca-447b-9bfa-7aec1033c0b3" containerID="818fbc246e74599791c548f8ec9f674bd2cce62aac0db87936992de325d1643e" exitCode=0 Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.281762 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-jr49c" event={"ID":"7a63a8af-95ca-447b-9bfa-7aec1033c0b3","Type":"ContainerDied","Data":"818fbc246e74599791c548f8ec9f674bd2cce62aac0db87936992de325d1643e"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.282109 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" event={"ID":"7a63a8af-95ca-447b-9bfa-7aec1033c0b3","Type":"ContainerStarted","Data":"e73beb7e6c2d61c347018ebdfd420613ee7cffe12afd787cca60e29db574a674"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.299092 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgjll" event={"ID":"f9f46b79-f300-42de-a2c3-a35670822a3b","Type":"ContainerStarted","Data":"fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.299135 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgjll" event={"ID":"f9f46b79-f300-42de-a2c3-a35670822a3b","Type":"ContainerStarted","Data":"a258bd567aafbecb3f6618d81a779cce26f985331e18b4b996cf0d535bef2a19"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.340200 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" event={"ID":"1c5ca023-fc82-4365-b2f9-f57220013a9f","Type":"ContainerStarted","Data":"93c2e90186a48188f20f6fe25a5f2948a1fa9b792d3dc9cc770299443b2c0f06"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.340249 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" event={"ID":"1c5ca023-fc82-4365-b2f9-f57220013a9f","Type":"ContainerStarted","Data":"d44f0518152106c3929a849bbc675996d033ea6cfa57c5c9e31da9fb8353aa55"} Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.342756 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-cn2nc 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.342801 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cn2nc" podUID="d75d91b3-7800-4645-b272-768f9d02f81b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.360045 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.361356 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.861322007 +0000 UTC m=+144.443276852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.461068 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.461444 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.462249 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.463953 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:20.963937604 +0000 UTC m=+144.545892449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.466578 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.476705 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:20 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:20 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:20 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.476764 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.479952 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dpln6"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.481946 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5"] Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.564393 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.064382633 +0000 UTC m=+144.646337478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.564166 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.616380 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjnpq" podStartSLOduration=121.616363892 podStartE2EDuration="2m1.616363892s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.614149562 +0000 UTC m=+144.196104397" watchObservedRunningTime="2026-02-18 19:36:20.616363892 +0000 UTC m=+144.198318737" Feb 18 19:36:20 crc kubenswrapper[4932]: W0218 19:36:20.639868 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod048e17bc_05bf_40e4_9f40_87d936fcf772.slice/crio-9614a2e59be43910c023c657f9503a561993abaeac9a3f60c668134e54e7399b WatchSource:0}: Error finding container 9614a2e59be43910c023c657f9503a561993abaeac9a3f60c668134e54e7399b: Status 404 returned error can't find the container with id 9614a2e59be43910c023c657f9503a561993abaeac9a3f60c668134e54e7399b Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.662241 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-845v8"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.665687 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.666138 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.166124391 +0000 UTC m=+144.748079236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.673353 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.705984 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.706427 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.738718 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.767108 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.767505 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:21.267493671 +0000 UTC m=+144.849448516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.768856 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f874p"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.791862 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.795032 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.806670 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.828863 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.851889 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nzrr6"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.851929 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5c79p"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.855974 4932 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nqdfv"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.860690 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.860903 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.862018 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jsz8m" podStartSLOduration=5.862006257 podStartE2EDuration="5.862006257s" podCreationTimestamp="2026-02-18 19:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.6835864 +0000 UTC m=+144.265541245" watchObservedRunningTime="2026-02-18 19:36:20.862006257 +0000 UTC m=+144.443961102" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.864698 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-8xrbm" podStartSLOduration=120.864688997 podStartE2EDuration="2m0.864688997s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.730653759 +0000 UTC m=+144.312608604" watchObservedRunningTime="2026-02-18 19:36:20.864688997 +0000 UTC m=+144.446643842" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.867346 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bch48"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.868588 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" podStartSLOduration=121.868576664 podStartE2EDuration="2m1.868576664s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.76208199 +0000 UTC m=+144.344036845" watchObservedRunningTime="2026-02-18 19:36:20.868576664 +0000 UTC m=+144.450531529" Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.870082 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.870331 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.370315793 +0000 UTC m=+144.952270638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.870479 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.870810 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.370799063 +0000 UTC m=+144.952753908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.873030 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.874990 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rmh4d"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.878812 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.882226 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jx49r"] Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.882838 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fgjll" podStartSLOduration=120.882826331 podStartE2EDuration="2m0.882826331s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.834399702 +0000 UTC m=+144.416354557" watchObservedRunningTime="2026-02-18 19:36:20.882826331 +0000 UTC m=+144.464781176" Feb 18 19:36:20 crc kubenswrapper[4932]: W0218 19:36:20.892645 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf2da7f7_2427_4099_ba40_855a7e850256.slice/crio-38bb13d0589766f79a59074443f18efa3f14ba1022c535a62ad5eca63fd5cb7e WatchSource:0}: Error finding container 38bb13d0589766f79a59074443f18efa3f14ba1022c535a62ad5eca63fd5cb7e: Status 404 returned error can't find the container with id 38bb13d0589766f79a59074443f18efa3f14ba1022c535a62ad5eca63fd5cb7e Feb 18 19:36:20 crc kubenswrapper[4932]: W0218 19:36:20.892820 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74a8d999_1731_4a72_8ca8_25913744a8e7.slice/crio-6a8b444cb4f234126f415b9435292e94a5ece6b61cf6c10044b8bcb71a4e78c3 WatchSource:0}: Error finding container 6a8b444cb4f234126f415b9435292e94a5ece6b61cf6c10044b8bcb71a4e78c3: Status 404 returned error can't find the container with id 6a8b444cb4f234126f415b9435292e94a5ece6b61cf6c10044b8bcb71a4e78c3 Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.893115 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vqskh"] Feb 18 19:36:20 crc kubenswrapper[4932]: W0218 19:36:20.968266 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a167a2c_fdc1_4d22_83b7_f1a63ab147bc.slice/crio-ef51d72404138c0fc9dbd2aa144ab1924b6edc15d9d24221f507fe166cce6dfe WatchSource:0}: Error finding container ef51d72404138c0fc9dbd2aa144ab1924b6edc15d9d24221f507fe166cce6dfe: Status 404 returned error can't find the container with id ef51d72404138c0fc9dbd2aa144ab1924b6edc15d9d24221f507fe166cce6dfe Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.970912 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:20 crc kubenswrapper[4932]: E0218 19:36:20.971286 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.471271113 +0000 UTC m=+145.053225958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:20 crc kubenswrapper[4932]: I0218 19:36:20.992041 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" podStartSLOduration=120.992022115 podStartE2EDuration="2m0.992022115s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.936699592 +0000 UTC m=+144.518654437" watchObservedRunningTime="2026-02-18 19:36:20.992022115 +0000 UTC m=+144.573976960"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.072452 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.072783 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.572773265 +0000 UTC m=+145.154728110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.078649 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" podStartSLOduration=121.078630606 podStartE2EDuration="2m1.078630606s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:20.993605961 +0000 UTC m=+144.575560806" watchObservedRunningTime="2026-02-18 19:36:21.078630606 +0000 UTC m=+144.660585451"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.079310 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bldhq" podStartSLOduration=121.079304121 podStartE2EDuration="2m1.079304121s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.070601387 +0000 UTC m=+144.652556232" watchObservedRunningTime="2026-02-18 19:36:21.079304121 +0000 UTC m=+144.661258966"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.117088 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-g2qvz" podStartSLOduration=121.117069953 podStartE2EDuration="2m1.117069953s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.10978394 +0000 UTC m=+144.691738785" watchObservedRunningTime="2026-02-18 19:36:21.117069953 +0000 UTC m=+144.699024798"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.117362 4932 csr.go:261] certificate signing request csr-lz5lj is approved, waiting to be issued
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.144066 4932 csr.go:257] certificate signing request csr-lz5lj is issued
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.174553 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.176874 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.676849215 +0000 UTC m=+145.258804060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.276387 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.276725 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.776713361 +0000 UTC m=+145.358668206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.345110 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" event={"ID":"c3710240-88d7-4611-bd77-6de0c54c1e3c","Type":"ContainerStarted","Data":"eb29a1953f8ef555b1645669da6a90d8f070cd48aeada6ce1b6c77371e4952fd"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.346660 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" event={"ID":"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15","Type":"ContainerStarted","Data":"51f002d9b6f9101211dae4ede1845c3983a19da3a4901e450b150f11021b724f"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.346687 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" event={"ID":"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15","Type":"ContainerStarted","Data":"e9b7f7db759fb788ae65c135e5d07ac1fe23f116dc3d863c923b98994c14a0f8"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.348065 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" event={"ID":"048e17bc-05bf-40e4-9f40-87d936fcf772","Type":"ContainerStarted","Data":"67de493045dcef40d7dcd7366beacc478832b3155bced8f9164fd20b4a4dc42d"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.348091 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" event={"ID":"048e17bc-05bf-40e4-9f40-87d936fcf772","Type":"ContainerStarted","Data":"9614a2e59be43910c023c657f9503a561993abaeac9a3f60c668134e54e7399b"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.349441 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" event={"ID":"81931b41-8917-4936-9e02-52f7c8c0f1c1","Type":"ContainerStarted","Data":"cc249f9154cfabcbe9700db247dc1b8fb0b1e7b7a0b0243940c7bdc27cf8f09e"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.349464 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" event={"ID":"81931b41-8917-4936-9e02-52f7c8c0f1c1","Type":"ContainerStarted","Data":"dc3bfd6e6b4590f4e880e27fa3a01f953cefdcfa61b4b478074f887d4e12d642"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.350103 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.352396 4932 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zjx26 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.352449 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" podUID="81931b41-8917-4936-9e02-52f7c8c0f1c1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.354925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rmh4d" event={"ID":"bac9c1de-1cfe-48d3-aafc-ddb41647c661","Type":"ContainerStarted","Data":"dc2a9f0c56c83f0657d3cc25cdee9c7edcea7a640a07a72d2b8b85fb5fe4ce90"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.358814 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" event={"ID":"26869f13-c7ee-411c-85a1-72338142184c","Type":"ContainerStarted","Data":"bb24a0e12402eddcf6cdc16832787a05797da7fa87b6bb5b59cf4835c2fe80cf"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.358849 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" event={"ID":"26869f13-c7ee-411c-85a1-72338142184c","Type":"ContainerStarted","Data":"9ea8b26360f7cb4dcd765564e77bb5d9c92035fcbeedf0218763d1c916f7bc0d"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.362015 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" event={"ID":"2c6e703e-85e3-4d17-a946-c17e42c27985","Type":"ContainerStarted","Data":"c1b266c1734fe90b1c784949370b87311a57272f865ea05fcc0aa1d68d48c4e1"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.362522 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" podStartSLOduration=122.362504894 podStartE2EDuration="2m2.362504894s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.360793476 +0000 UTC m=+144.942748341" watchObservedRunningTime="2026-02-18 19:36:21.362504894 +0000 UTC m=+144.944459739"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.363925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" event={"ID":"3f0021b0-4c6c-4085-9819-5c94471f320c","Type":"ContainerStarted","Data":"95f4f558a40a9cc2f0f1b1cbe2e98b450864ff1deded7a434e0390af6c580b32"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.364719 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-f874p" event={"ID":"d939dc03-30d6-4839-abd8-1d8d1bbf8cad","Type":"ContainerStarted","Data":"3eb6d2b9e329772d6d00fb4611281dcf058a09c4e0bfbdd08add4911a8bf4cda"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.365738 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" event={"ID":"aa8e769a-613b-40f2-9d07-b034d7871302","Type":"ContainerStarted","Data":"218b49c938d8e80777791011277f66c62ea4373c727337751615007c487feb8e"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.369713 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" event={"ID":"47777b7a-7599-4366-8e0f-a2ddf382e6ef","Type":"ContainerStarted","Data":"361b45f5dc274d18fba9f7f40ad8d679d6de6fc40c19a336e38eae5090e62165"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.371808 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" event={"ID":"998697c8-1e0d-46ae-b92f-ae8faf0faef5","Type":"ContainerStarted","Data":"4682983c599d9514be76c4f2cadaeac84cec527837fd00a542e7eb9ac6f4a200"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.371831 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" event={"ID":"998697c8-1e0d-46ae-b92f-ae8faf0faef5","Type":"ContainerStarted","Data":"9e3041f3e4758a990acc3d697f9d4013638759cd561e1b6028d938d4a7766d22"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.379741 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" event={"ID":"e39708f9-5d2d-4ed5-9243-7b71ef470ca7","Type":"ContainerStarted","Data":"784badddcd9797871fec35aacb4b375a077788de958864c50c207fa8ea3d3eb2"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.384617 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.384927 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.884912973 +0000 UTC m=+145.466867818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.402126 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" event={"ID":"74a8d999-1731-4a72-8ca8-25913744a8e7","Type":"ContainerStarted","Data":"6a8b444cb4f234126f415b9435292e94a5ece6b61cf6c10044b8bcb71a4e78c3"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.413888 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" event={"ID":"6fc8d511-a907-4f74-9a1c-e262d684b6a5","Type":"ContainerStarted","Data":"3b68aa7d64e8bea74e172246fbc3f26c4b6a13c28a7a757a8cb98e8a05ee2db2"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.413927 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" event={"ID":"6fc8d511-a907-4f74-9a1c-e262d684b6a5","Type":"ContainerStarted","Data":"782a8daff7e4075ef0ee5a57ed51d3d652ff0f58ded3e121cc09fa5730661d7d"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.418275 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" podStartSLOduration=121.418233026 podStartE2EDuration="2m1.418233026s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.382632572 +0000 UTC m=+144.964587417" watchObservedRunningTime="2026-02-18 19:36:21.418233026 +0000 UTC m=+145.000187881"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.423204 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" event={"ID":"4c04fd14-9dfc-4c0f-8125-8663eac51a45","Type":"ContainerStarted","Data":"1f865959b496ca27e6078451adc1f895bdaec83d0b23f9d162aff86190c8636e"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.430073 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" event={"ID":"93bf45fc-6447-479a-83d0-c9418ecb8270","Type":"ContainerStarted","Data":"fed08d834c2e43a93dae61be4d645f3cba6fa54723a82749001fc1ec74880172"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.431426 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" event={"ID":"df2da7f7-2427-4099-ba40-855a7e850256","Type":"ContainerStarted","Data":"38bb13d0589766f79a59074443f18efa3f14ba1022c535a62ad5eca63fd5cb7e"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.446261 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" event={"ID":"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc","Type":"ContainerStarted","Data":"ef51d72404138c0fc9dbd2aa144ab1924b6edc15d9d24221f507fe166cce6dfe"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.467557 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:21 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:21 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:21 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.467614 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.472559 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" event={"ID":"bd39f7e2-211c-4104-a72d-5374a6e95ee1","Type":"ContainerStarted","Data":"342a46aed7fdc89c8cc4637447f1816bf65305c6d7857a37f7abafd5f11db868"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.485777 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" event={"ID":"c7dff6ec-6703-40fb-a94a-c1d8b4641703","Type":"ContainerStarted","Data":"f7bee4b1562f1eb0407c829ab28a97f1f0477027b3ada5a4b0d4d3f4ee058b7c"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.486630 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" event={"ID":"c7dff6ec-6703-40fb-a94a-c1d8b4641703","Type":"ContainerStarted","Data":"76fd31eabde21f076f98a251187e981172fba61be374d89db62d5886076a1db3"}
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.486341 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:21.986328384 +0000 UTC m=+145.568283229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.486051 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.497726 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" event={"ID":"e6399c54-0b37-424f-8535-f8b0ab33ff52","Type":"ContainerStarted","Data":"040ae9ae49376814e888d658c1d4d2cc66be889dd2988a46d1c05ce4f26c22d8"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.512374 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nqdfv" event={"ID":"908b160b-0e48-4c2c-a35b-45fe25ca093f","Type":"ContainerStarted","Data":"b4dd05743a3c6419d689689c92d01af1f24a5caeb97cde2d086bbd92e07bdc79"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.513203 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qgjzj" podStartSLOduration=122.513162412 podStartE2EDuration="2m2.513162412s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.512439586 +0000 UTC m=+145.094394431" watchObservedRunningTime="2026-02-18 19:36:21.513162412 +0000 UTC m=+145.095117267"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.513425 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-6xmms" podStartSLOduration=121.513419808 podStartE2EDuration="2m1.513419808s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.443963299 +0000 UTC m=+145.025918144" watchObservedRunningTime="2026-02-18 19:36:21.513419808 +0000 UTC m=+145.095374653"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.529890 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" event={"ID":"6ed62cdb-a7e1-4366-88b7-7c2ed1102203","Type":"ContainerStarted","Data":"5d1d4fac97628abc1f8040c0c71c49080de76f1b2848ba9e3d4bd0b5514ed587"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.531568 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" event={"ID":"04a7bf0c-8c31-4401-b3db-4b5168a0cac7","Type":"ContainerStarted","Data":"fa9d4cf6a682f199c1c9e21d57871bface75544a08f5ac988425f8a339027b54"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.539764 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" event={"ID":"9a7e80fe-b260-461e-a11b-633a14eb304d","Type":"ContainerStarted","Data":"e0a441562e2c04b159e2ade7b33d1f20c3cdf4ecdc2d56b221d9d37982ed6960"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.540242 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.546040 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" event={"ID":"7a63a8af-95ca-447b-9bfa-7aec1033c0b3","Type":"ContainerStarted","Data":"30225872b0a67aafde07dba9d04b10254757981032e6b7fedb413d3e3b48efc3"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.553693 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" event={"ID":"715b331b-b140-461c-9a06-ba6ede3af8b6","Type":"ContainerStarted","Data":"566b710c5e0072c00efe43c1ee7247f757081caf10fecd43ce4e87d08e80bd49"}
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.560430 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" podStartSLOduration=121.560410915 podStartE2EDuration="2m1.560410915s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:21.558963893 +0000 UTC m=+145.140918738" watchObservedRunningTime="2026-02-18 19:36:21.560410915 +0000 UTC m=+145.142365760"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.563949 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.608478 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.608690 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.108669011 +0000 UTC m=+145.690623856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.609110 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.610258 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.110249626 +0000 UTC m=+145.692204471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.616526 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9"
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.716263 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.716467 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.216443793 +0000 UTC m=+145.798398638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.716694 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.717829 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.217820534 +0000 UTC m=+145.799775369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.820013 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.820238 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.320211516 +0000 UTC m=+145.902166361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.820361 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.820647 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.320634966 +0000 UTC m=+145.902589811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:21 crc kubenswrapper[4932]: I0218 19:36:21.921420 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:21 crc kubenswrapper[4932]: E0218 19:36:21.922071 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.422055426 +0000 UTC m=+146.004010271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.029753 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.030095 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.530084334 +0000 UTC m=+146.112039169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.130415 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.130562 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.630536974 +0000 UTC m=+146.212491819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.130658 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.131059 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.631051415 +0000 UTC m=+146.213006250 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.148954 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 19:31:21 +0000 UTC, rotation deadline is 2026-11-14 21:08:45.321195999 +0000 UTC Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.149031 4932 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6457h32m23.172167993s for next certificate rotation Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.235130 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.235696 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.735683407 +0000 UTC m=+146.317638252 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.336870 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.337447 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.837408325 +0000 UTC m=+146.419363170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.437611 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.437801 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.937751652 +0000 UTC m=+146.519706507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.438069 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.438551 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:22.938539899 +0000 UTC m=+146.520494814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.467211 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:22 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:22 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:22 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.467286 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.539725 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.540342 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:23.040305948 +0000 UTC m=+146.622260793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.588992 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" event={"ID":"93bf45fc-6447-479a-83d0-c9418ecb8270","Type":"ContainerStarted","Data":"61b04be33b6c62292c94d7c5d311f781da7ec75322ae958cd3d09e1ea6f8a896"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.592505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" event={"ID":"3f0021b0-4c6c-4085-9819-5c94471f320c","Type":"ContainerStarted","Data":"2a9df8a7e9e95a5414f9e00f71069ae0342eda33bd0e7c6f34bd30c564108d6a"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.622650 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" event={"ID":"e39708f9-5d2d-4ed5-9243-7b71ef470ca7","Type":"ContainerStarted","Data":"e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.623441 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.626339 4932 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5c79p container/marketplace-operator 
namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.626383 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.627547 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-f874p" event={"ID":"d939dc03-30d6-4839-abd8-1d8d1bbf8cad","Type":"ContainerStarted","Data":"45f95ff2ddc808c00dbaaefab86b9d86b57309866917df8479462017de9d89d5"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.628300 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-f874p" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.633550 4932 patch_prober.go:28] interesting pod/console-operator-58897d9998-f874p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.633619 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-f874p" podUID="d939dc03-30d6-4839-abd8-1d8d1bbf8cad" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.641571 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" event={"ID":"715b331b-b140-461c-9a06-ba6ede3af8b6","Type":"ContainerStarted","Data":"1d0f4e8bd197ca7adeff5e95b9e9c7dc4e55188347e271ee2ac7c783245e466a"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.642497 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.647019 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.652723 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.152702833 +0000 UTC m=+146.734657888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.661599 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" event={"ID":"df2da7f7-2427-4099-ba40-855a7e850256","Type":"ContainerStarted","Data":"b5ed4a6f0bf4f91923dd049718a94650dc99ee1ca654ed90bdaf1097425ae369"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.683096 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-t7f9j" podStartSLOduration=122.6830812 podStartE2EDuration="2m2.6830812s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:22.682360434 +0000 UTC m=+146.264315279" watchObservedRunningTime="2026-02-18 19:36:22.6830812 +0000 UTC m=+146.265036035" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.687692 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" event={"ID":"6fc8d511-a907-4f74-9a1c-e262d684b6a5","Type":"ContainerStarted","Data":"ba6f8d423ec6421c3748fcba26abc40208902613512ac682c5392e2d4b80bdc5"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.692035 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" 
event={"ID":"bd39f7e2-211c-4104-a72d-5374a6e95ee1","Type":"ContainerStarted","Data":"537fb16ce4adcd65df62cfc67a3162f784d1e5eeb00f209b9e035773d80fa2de"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.692737 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.694286 4932 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pkfx8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.694325 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" podUID="bd39f7e2-211c-4104-a72d-5374a6e95ee1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.717057 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" event={"ID":"04a7bf0c-8c31-4401-b3db-4b5168a0cac7","Type":"ContainerStarted","Data":"bead4ca36a615a1197bb9f0125682cf01e16d8e7b0a5a7f38b0b82457f5e8d12"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.720016 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rmh4d" event={"ID":"bac9c1de-1cfe-48d3-aafc-ddb41647c661","Type":"ContainerStarted","Data":"806cc07457ba48e38dd072d8b5b28ea7e4c94f3269892cf94e3533b1bdcd1df6"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.726881 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" 
event={"ID":"2c6e703e-85e3-4d17-a946-c17e42c27985","Type":"ContainerStarted","Data":"628192a5e10ee337c3fcef88b9c0176582782214135afb5ec2296d7f037fc31a"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.734633 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-jx49r" podStartSLOduration=122.734611859 podStartE2EDuration="2m2.734611859s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:22.728471282 +0000 UTC m=+146.310426117" watchObservedRunningTime="2026-02-18 19:36:22.734611859 +0000 UTC m=+146.316566714" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.751823 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.752911 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.252872746 +0000 UTC m=+146.834827771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.775137 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" event={"ID":"7a63a8af-95ca-447b-9bfa-7aec1033c0b3","Type":"ContainerStarted","Data":"6cdd4a2a2b9d9a810ebbcaa2625c227df643198b4946b058e045775f4a325219"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.786088 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" event={"ID":"e6399c54-0b37-424f-8535-f8b0ab33ff52","Type":"ContainerStarted","Data":"18e27fcaaceb83a927ce3fb5d48077583f5d03934b1a64a091ddf4e2953de5b8"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.795567 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" event={"ID":"6ed62cdb-a7e1-4366-88b7-7c2ed1102203","Type":"ContainerStarted","Data":"569a2d52f045f1d43aee0b414f30b30c374af7bf8028870cbef06575176d5248"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.795613 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" event={"ID":"6ed62cdb-a7e1-4366-88b7-7c2ed1102203","Type":"ContainerStarted","Data":"5367c9a85ef40742206bc445b261d6f2fb61ef5fb4db699d03b0ec6258478593"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.817197 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nqdfv" 
event={"ID":"908b160b-0e48-4c2c-a35b-45fe25ca093f","Type":"ContainerStarted","Data":"43d9d1395d43a6f0356745ba024182eac1b823dd5fe9b5bfb1ee28fb5b2c5216"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.849510 4932 generic.go:334] "Generic (PLEG): container finished" podID="26869f13-c7ee-411c-85a1-72338142184c" containerID="bb24a0e12402eddcf6cdc16832787a05797da7fa87b6bb5b59cf4835c2fe80cf" exitCode=0 Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.849578 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" event={"ID":"26869f13-c7ee-411c-85a1-72338142184c","Type":"ContainerDied","Data":"bb24a0e12402eddcf6cdc16832787a05797da7fa87b6bb5b59cf4835c2fe80cf"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.857153 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" event={"ID":"c3710240-88d7-4611-bd77-6de0c54c1e3c","Type":"ContainerStarted","Data":"865186d87f3bf8f8f99a7045b440b20d954cf72d2407262032a86db62f0a1877"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.859016 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.861063 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.361051347 +0000 UTC m=+146.943006182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.867556 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" event={"ID":"547cf2c3-4842-4d4e-ac24-8b2b1ec93a15","Type":"ContainerStarted","Data":"f6950a1501fe5ccdeb0870ba190fb5d884a9323c0e1da591b8925c6c9048b929"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.868159 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.870083 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" event={"ID":"4c04fd14-9dfc-4c0f-8125-8663eac51a45","Type":"ContainerStarted","Data":"32eab7f72bd3d49fd5b532bc227959e31e4fff903969e4b5c95bc612859ef888"} Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.905060 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zjx26" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.917686 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" podStartSLOduration=122.917668239 podStartE2EDuration="2m2.917668239s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:22.831868107 +0000 UTC m=+146.413822952" watchObservedRunningTime="2026-02-18 19:36:22.917668239 +0000 UTC m=+146.499623084" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.918148 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xt96f" podStartSLOduration=122.91814313 podStartE2EDuration="2m2.91814313s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:22.908890234 +0000 UTC m=+146.490845079" watchObservedRunningTime="2026-02-18 19:36:22.91814313 +0000 UTC m=+146.500097975" Feb 18 19:36:22 crc kubenswrapper[4932]: I0218 19:36:22.962795 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:22 crc kubenswrapper[4932]: E0218 19:36:22.963321 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.463306136 +0000 UTC m=+147.045260981 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.022116 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-f874p" podStartSLOduration=123.022088106 podStartE2EDuration="2m3.022088106s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:22.973858631 +0000 UTC m=+146.555813476" watchObservedRunningTime="2026-02-18 19:36:23.022088106 +0000 UTC m=+146.604042951" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.023069 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" podStartSLOduration=123.023064668 podStartE2EDuration="2m3.023064668s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.016954121 +0000 UTC m=+146.598908986" watchObservedRunningTime="2026-02-18 19:36:23.023064668 +0000 UTC m=+146.605019513" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.049169 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rmh4d" podStartSLOduration=8.049150559 podStartE2EDuration="8.049150559s" podCreationTimestamp="2026-02-18 19:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.048628448 +0000 UTC m=+146.630583293" watchObservedRunningTime="2026-02-18 19:36:23.049150559 +0000 UTC m=+146.631105404" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.067469 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.070142 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.570126637 +0000 UTC m=+147.152081482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.093539 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jxmcb" podStartSLOduration=123.093520688 podStartE2EDuration="2m3.093520688s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.092322212 +0000 UTC m=+146.674277057" watchObservedRunningTime="2026-02-18 19:36:23.093520688 +0000 UTC m=+146.675475533" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.116677 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zfbtq" podStartSLOduration=123.116657014 podStartE2EDuration="2m3.116657014s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.116354977 +0000 UTC m=+146.698309822" watchObservedRunningTime="2026-02-18 19:36:23.116657014 +0000 UTC m=+146.698611859" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.168587 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" podStartSLOduration=123.168571091 podStartE2EDuration="2m3.168571091s" podCreationTimestamp="2026-02-18 
19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.165668536 +0000 UTC m=+146.747623381" watchObservedRunningTime="2026-02-18 19:36:23.168571091 +0000 UTC m=+146.750525946" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.170718 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.171131 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.671115608 +0000 UTC m=+147.253070443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.239541 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" podStartSLOduration=124.239525383 podStartE2EDuration="2m4.239525383s" podCreationTimestamp="2026-02-18 19:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.204532863 +0000 UTC m=+146.786487708" watchObservedRunningTime="2026-02-18 19:36:23.239525383 +0000 UTC m=+146.821480228" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.273245 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.273713 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.773700734 +0000 UTC m=+147.355655579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.276479 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" podStartSLOduration=123.276445876 podStartE2EDuration="2m3.276445876s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.240080365 +0000 UTC m=+146.822035210" watchObservedRunningTime="2026-02-18 19:36:23.276445876 +0000 UTC m=+146.858400721" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.311339 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.311430 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.317312 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z8hql" podStartSLOduration=123.317293016 podStartE2EDuration="2m3.317293016s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.316734744 +0000 UTC m=+146.898689589" watchObservedRunningTime="2026-02-18 19:36:23.317293016 +0000 UTC 
m=+146.899247861" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.317787 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n288z" podStartSLOduration=123.317783127 podStartE2EDuration="2m3.317783127s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.298648821 +0000 UTC m=+146.880603676" watchObservedRunningTime="2026-02-18 19:36:23.317783127 +0000 UTC m=+146.899737972" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.351855 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.351897 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.353930 4932 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-z2jc5 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.353984 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" podUID="26869f13-c7ee-411c-85a1-72338142184c" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.382606 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.383012 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.882996001 +0000 UTC m=+147.464950846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.405826 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9q42v" podStartSLOduration=123.405805799 podStartE2EDuration="2m3.405805799s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.405321648 +0000 UTC m=+146.987276483" watchObservedRunningTime="2026-02-18 19:36:23.405805799 +0000 UTC m=+146.987760644" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.407560 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" podStartSLOduration=123.407555268 podStartE2EDuration="2m3.407555268s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.368742733 +0000 UTC m=+146.950697578" watchObservedRunningTime="2026-02-18 19:36:23.407555268 +0000 UTC m=+146.989510113" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.474787 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:23 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:23 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:23 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.474868 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.485736 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.486142 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:23.98612754 +0000 UTC m=+147.568082385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.587507 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.587674 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.087645192 +0000 UTC m=+147.669600037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.587935 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.588227 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.088219895 +0000 UTC m=+147.670174740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.642624 4932 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tcbfq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.642702 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" podUID="715b331b-b140-461c-9a06-ba6ede3af8b6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.688781 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.689002 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:24.188967081 +0000 UTC m=+147.770921956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.689134 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.689507 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.189497553 +0000 UTC m=+147.771452398 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.790225 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.790426 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.290402832 +0000 UTC m=+147.872357677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.790605 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.790933 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.290921494 +0000 UTC m=+147.872876329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.876882 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" event={"ID":"26869f13-c7ee-411c-85a1-72338142184c","Type":"ContainerStarted","Data":"239ed198d59b96808be74e97039276c1810e62ae4de8443d043aa9f97be81653"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.879054 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" event={"ID":"aa8e769a-613b-40f2-9d07-b034d7871302","Type":"ContainerStarted","Data":"5ee0cb22ae47bde8e853512d61accb4cca285d661e453c10d04fa3d29d05a20c"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.879110 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" event={"ID":"aa8e769a-613b-40f2-9d07-b034d7871302","Type":"ContainerStarted","Data":"5d93c144fbca8fc0014bf5b89687731042631cdcf1fff74a250418795005bb47"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.881151 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nqdfv" event={"ID":"908b160b-0e48-4c2c-a35b-45fe25ca093f","Type":"ContainerStarted","Data":"a85b3d7f77355b2a045efa0c57cd1557dbb335545fa660a66e01202a29c65be7"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.881367 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nqdfv" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 
19:36:23.883360 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" event={"ID":"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc","Type":"ContainerStarted","Data":"e16d8a42d6545579a2b014e5f758a4bd7cbf5d469e32e2cc2ca89414315db8d2"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.883396 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" event={"ID":"0a167a2c-fdc1-4d22-83b7-f1a63ab147bc","Type":"ContainerStarted","Data":"de5ce862f938b0ae728fa02d72d8c0d92a79e116ac85b6db385eb6ea4b2e5b3a"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.884675 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" event={"ID":"04a7bf0c-8c31-4401-b3db-4b5168a0cac7","Type":"ContainerStarted","Data":"44b7c1eb8596dbf466c104e3e4d456eee051d52c8cffdfae98e624914a22a157"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.886671 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" event={"ID":"998697c8-1e0d-46ae-b92f-ae8faf0faef5","Type":"ContainerStarted","Data":"55c79ab5b58370b271c3ff0ecc683b29415a3303fb48cee7fe133d9f4bd56ee9"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.887785 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" event={"ID":"74a8d999-1731-4a72-8ca8-25913744a8e7","Type":"ContainerStarted","Data":"a2ab08dd02b277ccf9b1b5656451c3ddf4010c8961d88e2156ef4096cfcbfeae"} Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.888234 4932 patch_prober.go:28] interesting pod/console-operator-58897d9998-f874p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection 
refused" start-of-body= Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.888273 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-f874p" podUID="d939dc03-30d6-4839-abd8-1d8d1bbf8cad" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.888657 4932 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pkfx8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.888709 4932 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5c79p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.888718 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" podUID="bd39f7e2-211c-4104-a72d-5374a6e95ee1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.888741 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.891109 
4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:23 crc kubenswrapper[4932]: E0218 19:36:23.893756 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.393739866 +0000 UTC m=+147.975694711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.913078 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nzrr6" podStartSLOduration=123.913061646 podStartE2EDuration="2m3.913061646s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.910615172 +0000 UTC m=+147.492570007" watchObservedRunningTime="2026-02-18 19:36:23.913061646 +0000 UTC m=+147.495016491" Feb 18 19:36:23 crc kubenswrapper[4932]: I0218 19:36:23.993662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.003382 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.503367509 +0000 UTC m=+148.085322344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.023063 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bch48" podStartSLOduration=124.023047698 podStartE2EDuration="2m4.023047698s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:24.022184169 +0000 UTC m=+147.604139014" watchObservedRunningTime="2026-02-18 19:36:24.023047698 +0000 UTC m=+147.605002543" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.023273 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ng7nn" podStartSLOduration=124.023268833 podStartE2EDuration="2m4.023268833s" podCreationTimestamp="2026-02-18 
19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:23.963156053 +0000 UTC m=+147.545110888" watchObservedRunningTime="2026-02-18 19:36:24.023268833 +0000 UTC m=+147.605223678" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.065712 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-vqskh" podStartSLOduration=124.065683198 podStartE2EDuration="2m4.065683198s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:24.063107321 +0000 UTC m=+147.645062166" watchObservedRunningTime="2026-02-18 19:36:24.065683198 +0000 UTC m=+147.647638043" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.097762 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.098867 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.598834757 +0000 UTC m=+148.180789612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.127877 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gvnf8" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.149596 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nqdfv" podStartSLOduration=9.149570108 podStartE2EDuration="9.149570108s" podCreationTimestamp="2026-02-18 19:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:24.109513835 +0000 UTC m=+147.691468680" watchObservedRunningTime="2026-02-18 19:36:24.149570108 +0000 UTC m=+147.731524953" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.204413 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.204989 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:24.704973823 +0000 UTC m=+148.286928668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.229725 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-845v8" podStartSLOduration=124.229709225 podStartE2EDuration="2m4.229709225s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:24.151356538 +0000 UTC m=+147.733311383" watchObservedRunningTime="2026-02-18 19:36:24.229709225 +0000 UTC m=+147.811664070" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.305820 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.306072 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.806032616 +0000 UTC m=+148.387987461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.306147 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.306591 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.806580758 +0000 UTC m=+148.388535603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.400554 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tcbfq" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.407028 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.407285 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.907247832 +0000 UTC m=+148.489202677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.407438 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.407753 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:24.907739043 +0000 UTC m=+148.489693888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.463314 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:24 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:24 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:24 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.463399 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.508385 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.508715 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:25.008674793 +0000 UTC m=+148.590629638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.609805 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.610156 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.110139495 +0000 UTC m=+148.692094340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.711484 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.711669 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.211641877 +0000 UTC m=+148.793596712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.711804 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.712133 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.212121898 +0000 UTC m=+148.794076743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.812624 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.812790 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.312765531 +0000 UTC m=+148.894720376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.812903 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.813215 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.313208001 +0000 UTC m=+148.895162846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.914386 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:24 crc kubenswrapper[4932]: E0218 19:36:24.914840 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.414825966 +0000 UTC m=+148.996780801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.915471 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" event={"ID":"e6399c54-0b37-424f-8535-f8b0ab33ff52","Type":"ContainerStarted","Data":"f26d727abe0fc9698fae8597feb6855bce1b709ba1153cca04bba8c6076eb619"} Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.915513 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" event={"ID":"e6399c54-0b37-424f-8535-f8b0ab33ff52","Type":"ContainerStarted","Data":"56fd0fac02496d5c2cab9a02b201698b7fe530f9859cdad374e4bc8433737306"} Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.936998 4932 generic.go:334] "Generic (PLEG): container finished" podID="048e17bc-05bf-40e4-9f40-87d936fcf772" containerID="67de493045dcef40d7dcd7366beacc478832b3155bced8f9164fd20b4a4dc42d" exitCode=0 Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.938095 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" event={"ID":"048e17bc-05bf-40e4-9f40-87d936fcf772","Type":"ContainerDied","Data":"67de493045dcef40d7dcd7366beacc478832b3155bced8f9164fd20b4a4dc42d"} Feb 18 19:36:24 crc kubenswrapper[4932]: I0218 19:36:24.953319 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pkfx8" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.015872 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.016075 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.016253 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.016349 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.016484 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.019493 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.519476499 +0000 UTC m=+149.101431344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.027837 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.031168 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.064345 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.069793 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.111304 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.116472 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.116957 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.117323 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.61730861 +0000 UTC m=+149.199263455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.156434 4932 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jr49c container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]log ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]etcd ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/generic-apiserver-start-informers ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/max-in-flight-filter ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 18 19:36:25 crc kubenswrapper[4932]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 18 19:36:25 crc kubenswrapper[4932]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/project.openshift.io-projectcache ok Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 18 19:36:25 crc kubenswrapper[4932]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Feb 18 19:36:25 crc kubenswrapper[4932]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 18 19:36:25 
crc kubenswrapper[4932]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 18 19:36:25 crc kubenswrapper[4932]: livez check failed
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.156515 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" podUID="7a63a8af-95ca-447b-9bfa-7aec1033c0b3" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.219903 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.220215 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.720203523 +0000 UTC m=+149.302158368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.299858 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.321189 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.321541 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.821525872 +0000 UTC m=+149.403480707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.393354 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-f874p"
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.426022 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.426538 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:25.926515002 +0000 UTC m=+149.508469847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.465650 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:25 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:25 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:25 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.465698 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.528578 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.528928 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.028910655 +0000 UTC m=+149.610865500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.610350 4932 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.629980 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.630325 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.130314855 +0000 UTC m=+149.712269700 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.731667 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.731851 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.231824898 +0000 UTC m=+149.813779743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.732159 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.732476 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.232462182 +0000 UTC m=+149.814417027 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: W0218 19:36:25.733021 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ea99bcfee0ed69adde71a056c94f7c19c6aa541b23b145fc871416ba0eef77fb WatchSource:0}: Error finding container ea99bcfee0ed69adde71a056c94f7c19c6aa541b23b145fc871416ba0eef77fb: Status 404 returned error can't find the container with id ea99bcfee0ed69adde71a056c94f7c19c6aa541b23b145fc871416ba0eef77fb
Feb 18 19:36:25 crc kubenswrapper[4932]: W0218 19:36:25.744220 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-c97b1629fbb9f6bf94aa9fd6d81fbb8855792d96aa4ba2de338a2a6637ac63d3 WatchSource:0}: Error finding container c97b1629fbb9f6bf94aa9fd6d81fbb8855792d96aa4ba2de338a2a6637ac63d3: Status 404 returned error can't find the container with id c97b1629fbb9f6bf94aa9fd6d81fbb8855792d96aa4ba2de338a2a6637ac63d3
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.833723 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.833925 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.333899433 +0000 UTC m=+149.915854278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.834150 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.834451 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.334436115 +0000 UTC m=+149.916390960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.935361 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:25 crc kubenswrapper[4932]: E0218 19:36:25.935740 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.435724473 +0000 UTC m=+150.017679318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.943025 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8fbdfafb929c168722cdd1f70dda4e7d3cd8f76b642674f6d96f43c8c513012d"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.943068 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ea99bcfee0ed69adde71a056c94f7c19c6aa541b23b145fc871416ba0eef77fb"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.945014 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3298fa47b7c069a0ac00cfcb47c585b1b0f88dc04b22062b39cb968deb3779c7"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.945039 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1ead9a2e78d8535670a0692c6bf2d7d511128341670e4780c7705a52db0d44db"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.947293 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" event={"ID":"e6399c54-0b37-424f-8535-f8b0ab33ff52","Type":"ContainerStarted","Data":"ac52211488c7f4e4b473ddff013d43fd9f96b4d445d5d9074217ca74e851c3e5"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.949114 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"82404893f52cd3bf9e4aad0eef54ec40591fd2b76b9faed5bbc03a6a56c0763b"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.949139 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c97b1629fbb9f6bf94aa9fd6d81fbb8855792d96aa4ba2de338a2a6637ac63d3"}
Feb 18 19:36:25 crc kubenswrapper[4932]: I0218 19:36:25.949427 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.037077 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:26 crc kubenswrapper[4932]: E0218 19:36:26.040364 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.540351515 +0000 UTC m=+150.122306360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.138770 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:26 crc kubenswrapper[4932]: E0218 19:36:26.139155 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.639136597 +0000 UTC m=+150.221091442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.164761 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.178783 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dpln6" podStartSLOduration=11.17876644 podStartE2EDuration="11.17876644s" podCreationTimestamp="2026-02-18 19:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:26.030500985 +0000 UTC m=+149.612455830" watchObservedRunningTime="2026-02-18 19:36:26.17876644 +0000 UTC m=+149.760721295"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.240495 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-496qr\" (UniqueName: \"kubernetes.io/projected/048e17bc-05bf-40e4-9f40-87d936fcf772-kube-api-access-496qr\") pod \"048e17bc-05bf-40e4-9f40-87d936fcf772\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") "
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.240549 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/048e17bc-05bf-40e4-9f40-87d936fcf772-config-volume\") pod \"048e17bc-05bf-40e4-9f40-87d936fcf772\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") "
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.240629 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/048e17bc-05bf-40e4-9f40-87d936fcf772-secret-volume\") pod \"048e17bc-05bf-40e4-9f40-87d936fcf772\" (UID: \"048e17bc-05bf-40e4-9f40-87d936fcf772\") "
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.240909 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.241095 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/048e17bc-05bf-40e4-9f40-87d936fcf772-config-volume" (OuterVolumeSpecName: "config-volume") pod "048e17bc-05bf-40e4-9f40-87d936fcf772" (UID: "048e17bc-05bf-40e4-9f40-87d936fcf772"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4932]: E0218 19:36:26.241310 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.741295424 +0000 UTC m=+150.323250269 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.245871 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048e17bc-05bf-40e4-9f40-87d936fcf772-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "048e17bc-05bf-40e4-9f40-87d936fcf772" (UID: "048e17bc-05bf-40e4-9f40-87d936fcf772"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.245928 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/048e17bc-05bf-40e4-9f40-87d936fcf772-kube-api-access-496qr" (OuterVolumeSpecName: "kube-api-access-496qr") pod "048e17bc-05bf-40e4-9f40-87d936fcf772" (UID: "048e17bc-05bf-40e4-9f40-87d936fcf772"). InnerVolumeSpecName "kube-api-access-496qr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.337459 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qvwc8"]
Feb 18 19:36:26 crc kubenswrapper[4932]: E0218 19:36:26.337640 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048e17bc-05bf-40e4-9f40-87d936fcf772" containerName="collect-profiles"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.337665 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="048e17bc-05bf-40e4-9f40-87d936fcf772" containerName="collect-profiles"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.337752 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="048e17bc-05bf-40e4-9f40-87d936fcf772" containerName="collect-profiles"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.338371 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.342937 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.343199 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:26 crc kubenswrapper[4932]: E0218 19:36:26.343310 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.843292628 +0000 UTC m=+150.425247473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.343498 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.343550 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-496qr\" (UniqueName: \"kubernetes.io/projected/048e17bc-05bf-40e4-9f40-87d936fcf772-kube-api-access-496qr\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.343566 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/048e17bc-05bf-40e4-9f40-87d936fcf772-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.343579 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/048e17bc-05bf-40e4-9f40-87d936fcf772-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4932]: E0218 19:36:26.343788 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:36:26.843781329 +0000 UTC m=+150.425736174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wlcbj" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.351536 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qvwc8"]
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.393297 4932 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T19:36:25.61037117Z","Handler":null,"Name":""}
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.397958 4932 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.397995 4932 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.444651 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.444855 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr45c\" (UniqueName: \"kubernetes.io/projected/cafe1e82-ef19-4345-825e-cc9bf016b353-kube-api-access-sr45c\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.444893 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-catalog-content\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.444917 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-utilities\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.448133 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.466421 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:26 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:26 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:26 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.466492 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.532611 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j2xgw"]
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.533455 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2xgw"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.537871 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.546282 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr45c\" (UniqueName: \"kubernetes.io/projected/cafe1e82-ef19-4345-825e-cc9bf016b353-kube-api-access-sr45c\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.546345 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.546377 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-catalog-content\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.546411 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-utilities\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.547224 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-utilities\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.547644 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-catalog-content\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.549640 4932 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.549694 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.553403 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2xgw"]
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.567388 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr45c\" (UniqueName: \"kubernetes.io/projected/cafe1e82-ef19-4345-825e-cc9bf016b353-kube-api-access-sr45c\") pod \"certified-operators-qvwc8\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.584411 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wlcbj\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") " pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.647593 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-catalog-content\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.647688 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-utilities\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.647710 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgm8v\" (UniqueName: \"kubernetes.io/projected/62bbf001-ce57-471f-ad28-1d892d0d30e9-kube-api-access-rgm8v\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.656481 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qvwc8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.745617 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gbkr8"]
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.748444 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbkr8"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.753842 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgm8v\" (UniqueName: \"kubernetes.io/projected/62bbf001-ce57-471f-ad28-1d892d0d30e9-kube-api-access-rgm8v\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.753938 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-catalog-content\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.754066 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-utilities\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw"
Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.754820 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-utilities\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " 
pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.755678 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-catalog-content\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.759013 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gbkr8"] Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.780664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgm8v\" (UniqueName: \"kubernetes.io/projected/62bbf001-ce57-471f-ad28-1d892d0d30e9-kube-api-access-rgm8v\") pod \"community-operators-j2xgw\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.841085 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.849920 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.855653 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-utilities\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.855722 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-catalog-content\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.855755 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hghw\" (UniqueName: \"kubernetes.io/projected/29a4229b-f53b-4cd7-b81b-7fc2dfded045-kube-api-access-6hghw\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.877991 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qvwc8"] Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.938977 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p69tc"] Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.943035 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.959641 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hghw\" (UniqueName: \"kubernetes.io/projected/29a4229b-f53b-4cd7-b81b-7fc2dfded045-kube-api-access-6hghw\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.959752 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-utilities\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.959871 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-catalog-content\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.960508 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-catalog-content\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.960987 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-utilities\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " 
pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.976355 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p69tc"] Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.987042 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hghw\" (UniqueName: \"kubernetes.io/projected/29a4229b-f53b-4cd7-b81b-7fc2dfded045-kube-api-access-6hghw\") pod \"certified-operators-gbkr8\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.990009 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" event={"ID":"048e17bc-05bf-40e4-9f40-87d936fcf772","Type":"ContainerDied","Data":"9614a2e59be43910c023c657f9503a561993abaeac9a3f60c668134e54e7399b"} Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.990050 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9614a2e59be43910c023c657f9503a561993abaeac9a3f60c668134e54e7399b" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.990183 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc" Feb 18 19:36:26 crc kubenswrapper[4932]: I0218 19:36:26.991275 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvwc8" event={"ID":"cafe1e82-ef19-4345-825e-cc9bf016b353","Type":"ContainerStarted","Data":"94c56c7588969970298ca76c9989e0d42da323b423ba2e42eec0825109130ea6"} Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.061318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-utilities\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.061360 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-catalog-content\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.061409 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvppw\" (UniqueName: \"kubernetes.io/projected/81ac7afd-2261-4af0-9b59-f18c98424c21-kube-api-access-vvppw\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.074930 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.075275 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlcbj"] Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.091024 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2xgw"] Feb 18 19:36:27 crc kubenswrapper[4932]: W0218 19:36:27.103862 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62bbf001_ce57_471f_ad28_1d892d0d30e9.slice/crio-598a3819cd069f787a558e804a3b29d8f39ee54c7fd7148d56ad085f056a9d34 WatchSource:0}: Error finding container 598a3819cd069f787a558e804a3b29d8f39ee54c7fd7148d56ad085f056a9d34: Status 404 returned error can't find the container with id 598a3819cd069f787a558e804a3b29d8f39ee54c7fd7148d56ad085f056a9d34 Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.162284 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-utilities\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.162351 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-catalog-content\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.162894 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-utilities\") pod 
\"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.162818 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-catalog-content\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.162939 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvppw\" (UniqueName: \"kubernetes.io/projected/81ac7afd-2261-4af0-9b59-f18c98424c21-kube-api-access-vvppw\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.186978 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvppw\" (UniqueName: \"kubernetes.io/projected/81ac7afd-2261-4af0-9b59-f18c98424c21-kube-api-access-vvppw\") pod \"community-operators-p69tc\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.188134 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.276576 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gbkr8"] Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.278909 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:36:27 crc kubenswrapper[4932]: W0218 19:36:27.334646 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29a4229b_f53b_4cd7_b81b_7fc2dfded045.slice/crio-a2c366de25c0453f7a2db8d06c018b6056eb68e4c159566103c144c6b3b72029 WatchSource:0}: Error finding container a2c366de25c0453f7a2db8d06c018b6056eb68e4c159566103c144c6b3b72029: Status 404 returned error can't find the container with id a2c366de25c0453f7a2db8d06c018b6056eb68e4c159566103c144c6b3b72029 Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.461108 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p69tc"] Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.463696 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:27 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:27 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:27 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.463730 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:27 crc kubenswrapper[4932]: W0218 19:36:27.520971 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81ac7afd_2261_4af0_9b59_f18c98424c21.slice/crio-c390b6f5bfce7b21488ea351096dff0534a3fb41e4e604cf85b8536016c29379 WatchSource:0}: Error finding container 
c390b6f5bfce7b21488ea351096dff0534a3fb41e4e604cf85b8536016c29379: Status 404 returned error can't find the container with id c390b6f5bfce7b21488ea351096dff0534a3fb41e4e604cf85b8536016c29379 Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.606626 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.606696 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.622251 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-cn2nc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.622330 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cn2nc" podUID="d75d91b3-7800-4645-b272-768f9d02f81b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.622682 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-cn2nc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Feb 18 19:36:27 crc kubenswrapper[4932]: 
I0218 19:36:27.622733 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cn2nc" podUID="d75d91b3-7800-4645-b272-768f9d02f81b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.996428 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" event={"ID":"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22","Type":"ContainerStarted","Data":"977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c"} Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.996834 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" event={"ID":"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22","Type":"ContainerStarted","Data":"e2cd6e9fe7b91c0ea246bc59cf9d11b75cc0eb7a103b52573fd6adf6936ac914"} Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.996866 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.998277 4932 generic.go:334] "Generic (PLEG): container finished" podID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerID="b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f" exitCode=0 Feb 18 19:36:27 crc kubenswrapper[4932]: I0218 19:36:27.998354 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvwc8" event={"ID":"cafe1e82-ef19-4345-825e-cc9bf016b353","Type":"ContainerDied","Data":"b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.000117 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.000435 4932 
generic.go:334] "Generic (PLEG): container finished" podID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerID="6d3ff69895d4bcdcf15d410bfbcd335c0b79b07284d7d99d33d18f064ce3f033" exitCode=0 Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.000503 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerDied","Data":"6d3ff69895d4bcdcf15d410bfbcd335c0b79b07284d7d99d33d18f064ce3f033"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.000524 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerStarted","Data":"598a3819cd069f787a558e804a3b29d8f39ee54c7fd7148d56ad085f056a9d34"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.003154 4932 generic.go:334] "Generic (PLEG): container finished" podID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerID="1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547" exitCode=0 Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.003268 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p69tc" event={"ID":"81ac7afd-2261-4af0-9b59-f18c98424c21","Type":"ContainerDied","Data":"1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.003327 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p69tc" event={"ID":"81ac7afd-2261-4af0-9b59-f18c98424c21","Type":"ContainerStarted","Data":"c390b6f5bfce7b21488ea351096dff0534a3fb41e4e604cf85b8536016c29379"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.005329 4932 generic.go:334] "Generic (PLEG): container finished" podID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerID="a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3" exitCode=0 Feb 18 
19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.005373 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerDied","Data":"a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.005406 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerStarted","Data":"a2c366de25c0453f7a2db8d06c018b6056eb68e4c159566103c144c6b3b72029"} Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.027166 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" podStartSLOduration=128.027143651 podStartE2EDuration="2m8.027143651s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:28.024375819 +0000 UTC m=+151.606330684" watchObservedRunningTime="2026-02-18 19:36:28.027143651 +0000 UTC m=+151.609098506" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.290486 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.290532 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.292620 4932 patch_prober.go:28] interesting pod/console-f9d7485db-fgjll container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.292723 4932 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fgjll" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.315225 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.323025 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jr49c" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.369103 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.397760 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z2jc5" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.462124 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-8xrbm" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.465365 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:28 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:28 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:28 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.465408 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.536647 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4w2tj"] Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.537585 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.541725 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.560039 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w2tj"] Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.588225 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lttp\" (UniqueName: \"kubernetes.io/projected/b77a623a-ff2e-45aa-9004-b211b0200a3f-kube-api-access-7lttp\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.588279 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-catalog-content\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.588367 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-utilities\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " 
pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.689489 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lttp\" (UniqueName: \"kubernetes.io/projected/b77a623a-ff2e-45aa-9004-b211b0200a3f-kube-api-access-7lttp\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.689538 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-catalog-content\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.689620 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-utilities\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.690091 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-utilities\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.690657 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-catalog-content\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" 
Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.722201 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lttp\" (UniqueName: \"kubernetes.io/projected/b77a623a-ff2e-45aa-9004-b211b0200a3f-kube-api-access-7lttp\") pod \"redhat-marketplace-4w2tj\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.853582 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.857779 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.946443 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vwwjl"] Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.948512 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:36:28 crc kubenswrapper[4932]: I0218 19:36:28.953628 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwwjl"] Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.001430 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-utilities\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.001467 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-522zn\" (UniqueName: \"kubernetes.io/projected/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-kube-api-access-522zn\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.001663 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-catalog-content\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.102417 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-catalog-content\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.102510 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-522zn\" (UniqueName: \"kubernetes.io/projected/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-kube-api-access-522zn\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.102526 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-utilities\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.105969 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-catalog-content\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.108053 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-utilities\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.135704 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.136754 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.148565 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.150571 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-522zn\" (UniqueName: \"kubernetes.io/projected/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-kube-api-access-522zn\") pod \"redhat-marketplace-vwwjl\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") " pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.150581 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.150682 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.213070 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5202101c-f325-4956-a53c-f6b5663ad5cc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.213262 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5202101c-f325-4956-a53c-f6b5663ad5cc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.284419 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4w2tj"] Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.293946 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.313607 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5202101c-f325-4956-a53c-f6b5663ad5cc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.313640 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5202101c-f325-4956-a53c-f6b5663ad5cc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.313735 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5202101c-f325-4956-a53c-f6b5663ad5cc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.338671 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5202101c-f325-4956-a53c-f6b5663ad5cc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.469814 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:29 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:29 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:29 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.469881 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.496877 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.538433 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-chh8j"] Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.540442 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.541919 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.600317 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chh8j"] Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.717401 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-utilities\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.717477 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-catalog-content\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.717511 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc64x\" (UniqueName: \"kubernetes.io/projected/ce921030-ec82-420d-a9e7-cd04ee7e055b-kube-api-access-fc64x\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.818674 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-catalog-content\") pod \"redhat-operators-chh8j\" (UID: 
\"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.818738 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc64x\" (UniqueName: \"kubernetes.io/projected/ce921030-ec82-420d-a9e7-cd04ee7e055b-kube-api-access-fc64x\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.818787 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-utilities\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.819268 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-utilities\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.858252 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-catalog-content\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.858040 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc64x\" (UniqueName: \"kubernetes.io/projected/ce921030-ec82-420d-a9e7-cd04ee7e055b-kube-api-access-fc64x\") pod \"redhat-operators-chh8j\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " 
pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.915590 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.936770 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-78d5s"] Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.938233 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.946110 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78d5s"] Feb 18 19:36:29 crc kubenswrapper[4932]: I0218 19:36:29.958626 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwwjl"] Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.029874 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w2tj" event={"ID":"b77a623a-ff2e-45aa-9004-b211b0200a3f","Type":"ContainerStarted","Data":"79bf00f2e14eaea6ac861e5d5414045b4e7af7c9494be58a0ddf97f7bbd0066e"} Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.031970 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerStarted","Data":"aa8524bb79cb00bc572889b14100dbb8df53c65222c30b9f755ec3035f0dbea0"} Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.059527 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.121644 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-utilities\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.121723 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-catalog-content\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.121778 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5ks5\" (UniqueName: \"kubernetes.io/projected/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-kube-api-access-h5ks5\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.224266 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-catalog-content\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.224814 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5ks5\" (UniqueName: \"kubernetes.io/projected/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-kube-api-access-h5ks5\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.224947 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-utilities\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.226903 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-utilities\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.227336 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-catalog-content\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.255035 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5ks5\" (UniqueName: \"kubernetes.io/projected/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-kube-api-access-h5ks5\") pod \"redhat-operators-78d5s\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") " pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.265205 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.288774 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chh8j"] Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.468619 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:30 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:30 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:30 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.468688 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:30 crc kubenswrapper[4932]: I0218 19:36:30.659352 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78d5s"] Feb 18 19:36:30 crc kubenswrapper[4932]: W0218 19:36:30.678729 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2483e7fb_5cc5_4715_8eea_fd5cf6b31d75.slice/crio-d4b4e12432d81a20c3a5774755df782409b7a4c04cd3667ffe8f283572befe4d WatchSource:0}: Error finding container d4b4e12432d81a20c3a5774755df782409b7a4c04cd3667ffe8f283572befe4d: Status 404 returned error can't find the container with id d4b4e12432d81a20c3a5774755df782409b7a4c04cd3667ffe8f283572befe4d Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.060384 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"5202101c-f325-4956-a53c-f6b5663ad5cc","Type":"ContainerStarted","Data":"1e7d7bb277500c87441c43a9dbcbe843235a8108c8af31def1f0b1876f3703b9"} Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.064972 4932 generic.go:334] "Generic (PLEG): container finished" podID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerID="a6fd3575dcddfe36fd8dfcc8e6bcb0f7035ca23b01b700d078f298418c1896e8" exitCode=0 Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.065543 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w2tj" event={"ID":"b77a623a-ff2e-45aa-9004-b211b0200a3f","Type":"ContainerDied","Data":"a6fd3575dcddfe36fd8dfcc8e6bcb0f7035ca23b01b700d078f298418c1896e8"} Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.090535 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerStarted","Data":"edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb"} Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.098655 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerStarted","Data":"df158c2125177f92039a79a6401f4bb6f7b2c14373fe74c537b86d94e6f1ab0e"} Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.110271 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78d5s" event={"ID":"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75","Type":"ContainerStarted","Data":"d4b4e12432d81a20c3a5774755df782409b7a4c04cd3667ffe8f283572befe4d"} Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.466361 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 18 19:36:31 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:31 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:31 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:31 crc kubenswrapper[4932]: I0218 19:36:31.466582 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.125152 4932 generic.go:334] "Generic (PLEG): container finished" podID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerID="edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb" exitCode=0 Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.125210 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerDied","Data":"edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb"} Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.129143 4932 generic.go:334] "Generic (PLEG): container finished" podID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerID="bda935338a806285152d3571a5562901d0dc27851a41082e686230cc48a54915" exitCode=0 Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.129181 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerDied","Data":"bda935338a806285152d3571a5562901d0dc27851a41082e686230cc48a54915"} Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.134157 4932 generic.go:334] "Generic (PLEG): container finished" podID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerID="4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648" exitCode=0 Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.134274 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78d5s" event={"ID":"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75","Type":"ContainerDied","Data":"4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648"} Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.136391 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5202101c-f325-4956-a53c-f6b5663ad5cc","Type":"ContainerStarted","Data":"0ca1a22601dd73aec8e8f8c77febf4c85644399d7e326c0c391167e64c8df9c5"} Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.165552 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.165535597 podStartE2EDuration="3.165535597s" podCreationTimestamp="2026-02-18 19:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:32.162624792 +0000 UTC m=+155.744579637" watchObservedRunningTime="2026-02-18 19:36:32.165535597 +0000 UTC m=+155.747490442" Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.487060 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:36:32 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld Feb 18 19:36:32 crc kubenswrapper[4932]: [+]process-running ok Feb 18 19:36:32 crc kubenswrapper[4932]: healthz check failed Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.487148 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.579531 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.580215 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.584548 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.584699 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.586324 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbdc287c-8b65-4c46-8697-8af76f3cae17-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.586367 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbdc287c-8b65-4c46-8697-8af76f3cae17-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.588802 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.690044 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbdc287c-8b65-4c46-8697-8af76f3cae17-kube-api-access\") pod 
\"revision-pruner-8-crc\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.690124 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbdc287c-8b65-4c46-8697-8af76f3cae17-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.690249 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbdc287c-8b65-4c46-8697-8af76f3cae17-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.723024 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbdc287c-8b65-4c46-8697-8af76f3cae17-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:36:32 crc kubenswrapper[4932]: I0218 19:36:32.913480 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:36:33 crc kubenswrapper[4932]: I0218 19:36:33.161063 4932 generic.go:334] "Generic (PLEG): container finished" podID="5202101c-f325-4956-a53c-f6b5663ad5cc" containerID="0ca1a22601dd73aec8e8f8c77febf4c85644399d7e326c0c391167e64c8df9c5" exitCode=0
Feb 18 19:36:33 crc kubenswrapper[4932]: I0218 19:36:33.161100 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5202101c-f325-4956-a53c-f6b5663ad5cc","Type":"ContainerDied","Data":"0ca1a22601dd73aec8e8f8c77febf4c85644399d7e326c0c391167e64c8df9c5"}
Feb 18 19:36:33 crc kubenswrapper[4932]: I0218 19:36:33.365115 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 18 19:36:33 crc kubenswrapper[4932]: I0218 19:36:33.464084 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:33 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:33 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:33 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:33 crc kubenswrapper[4932]: I0218 19:36:33.466590 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:33 crc kubenswrapper[4932]: I0218 19:36:33.893972 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nqdfv"
Feb 18 19:36:34 crc kubenswrapper[4932]: I0218 19:36:34.178611 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fbdc287c-8b65-4c46-8697-8af76f3cae17","Type":"ContainerStarted","Data":"e194023dbd4a48ad09c7343255eaa7721883c6d6ff199ee4dae4cf21de130a3d"}
Feb 18 19:36:34 crc kubenswrapper[4932]: I0218 19:36:34.178679 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fbdc287c-8b65-4c46-8697-8af76f3cae17","Type":"ContainerStarted","Data":"cb8ac628b41bce3099321ef5f45cf31fa63dc85d80a600c0ec1b0aff786fa67b"}
Feb 18 19:36:34 crc kubenswrapper[4932]: I0218 19:36:34.206017 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.2059984 podStartE2EDuration="2.2059984s" podCreationTimestamp="2026-02-18 19:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:34.204313662 +0000 UTC m=+157.786268517" watchObservedRunningTime="2026-02-18 19:36:34.2059984 +0000 UTC m=+157.787953245"
Feb 18 19:36:34 crc kubenswrapper[4932]: I0218 19:36:34.464956 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:34 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:34 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:34 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:34 crc kubenswrapper[4932]: I0218 19:36:34.465019 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:35 crc kubenswrapper[4932]: I0218 19:36:35.192470 4932 generic.go:334] "Generic (PLEG): container finished" podID="fbdc287c-8b65-4c46-8697-8af76f3cae17" containerID="e194023dbd4a48ad09c7343255eaa7721883c6d6ff199ee4dae4cf21de130a3d" exitCode=0
Feb 18 19:36:35 crc kubenswrapper[4932]: I0218 19:36:35.201578 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fbdc287c-8b65-4c46-8697-8af76f3cae17","Type":"ContainerDied","Data":"e194023dbd4a48ad09c7343255eaa7721883c6d6ff199ee4dae4cf21de130a3d"}
Feb 18 19:36:35 crc kubenswrapper[4932]: I0218 19:36:35.464296 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:35 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:35 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:35 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:35 crc kubenswrapper[4932]: I0218 19:36:35.464370 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:35 crc kubenswrapper[4932]: I0218 19:36:35.931421 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5"
Feb 18 19:36:36 crc kubenswrapper[4932]: I0218 19:36:36.464759 4932 patch_prober.go:28] interesting pod/router-default-5444994796-8xrbm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:36:36 crc kubenswrapper[4932]: [-]has-synced failed: reason withheld
Feb 18 19:36:36 crc kubenswrapper[4932]: [+]process-running ok
Feb 18 19:36:36 crc kubenswrapper[4932]: healthz check failed
Feb 18 19:36:36 crc kubenswrapper[4932]: I0218 19:36:36.465050 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-8xrbm" podUID="522d227a-c827-415e-9e8b-e5907ba83363" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:36:37 crc kubenswrapper[4932]: I0218 19:36:37.493945 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-8xrbm"
Feb 18 19:36:37 crc kubenswrapper[4932]: I0218 19:36:37.509393 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-8xrbm"
Feb 18 19:36:37 crc kubenswrapper[4932]: I0218 19:36:37.621825 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-cn2nc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body=
Feb 18 19:36:37 crc kubenswrapper[4932]: I0218 19:36:37.621882 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cn2nc" podUID="d75d91b3-7800-4645-b272-768f9d02f81b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused"
Feb 18 19:36:37 crc kubenswrapper[4932]: I0218 19:36:37.622117 4932 patch_prober.go:28] interesting pod/downloads-7954f5f757-cn2nc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body=
Feb 18 19:36:37 crc kubenswrapper[4932]: I0218 19:36:37.622323 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cn2nc" podUID="d75d91b3-7800-4645-b272-768f9d02f81b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused"
Feb 18 19:36:38 crc kubenswrapper[4932]: I0218 19:36:38.291085 4932 patch_prober.go:28] interesting pod/console-f9d7485db-fgjll container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Feb 18 19:36:38 crc kubenswrapper[4932]: I0218 19:36:38.291538 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fgjll" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused"
Feb 18 19:36:39 crc kubenswrapper[4932]: I0218 19:36:39.773279 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gkgsj"]
Feb 18 19:36:39 crc kubenswrapper[4932]: I0218 19:36:39.773536 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" podUID="18e44919-11c5-4974-9c71-ff803e668247" containerName="controller-manager" containerID="cri-o://92c4e7fb68e8f7dfb6986ed0cee4d733efb5ba5235fa8329b6cb5754629a9a84" gracePeriod=30
Feb 18 19:36:39 crc kubenswrapper[4932]: I0218 19:36:39.798395 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"]
Feb 18 19:36:39 crc kubenswrapper[4932]: I0218 19:36:39.798758 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" podUID="28fd23a7-1b44-440f-be4a-8c236cf8902b" containerName="route-controller-manager" containerID="cri-o://34ae58b97ea4a3420f81b7dbc9be8a4d3eb79fc358a2e2df9dc60f04b8d15203" gracePeriod=30
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.222638 4932 generic.go:334] "Generic (PLEG): container finished" podID="28fd23a7-1b44-440f-be4a-8c236cf8902b" containerID="34ae58b97ea4a3420f81b7dbc9be8a4d3eb79fc358a2e2df9dc60f04b8d15203" exitCode=0
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.222781 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" event={"ID":"28fd23a7-1b44-440f-be4a-8c236cf8902b","Type":"ContainerDied","Data":"34ae58b97ea4a3420f81b7dbc9be8a4d3eb79fc358a2e2df9dc60f04b8d15203"}
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.560034 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.565754 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714410 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5202101c-f325-4956-a53c-f6b5663ad5cc-kubelet-dir\") pod \"5202101c-f325-4956-a53c-f6b5663ad5cc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") "
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714484 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbdc287c-8b65-4c46-8697-8af76f3cae17-kube-api-access\") pod \"fbdc287c-8b65-4c46-8697-8af76f3cae17\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") "
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714508 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbdc287c-8b65-4c46-8697-8af76f3cae17-kubelet-dir\") pod \"fbdc287c-8b65-4c46-8697-8af76f3cae17\" (UID: \"fbdc287c-8b65-4c46-8697-8af76f3cae17\") "
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714611 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5202101c-f325-4956-a53c-f6b5663ad5cc-kube-api-access\") pod \"5202101c-f325-4956-a53c-f6b5663ad5cc\" (UID: \"5202101c-f325-4956-a53c-f6b5663ad5cc\") "
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714613 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5202101c-f325-4956-a53c-f6b5663ad5cc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5202101c-f325-4956-a53c-f6b5663ad5cc" (UID: "5202101c-f325-4956-a53c-f6b5663ad5cc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714709 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbdc287c-8b65-4c46-8697-8af76f3cae17-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fbdc287c-8b65-4c46-8697-8af76f3cae17" (UID: "fbdc287c-8b65-4c46-8697-8af76f3cae17"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714881 4932 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5202101c-f325-4956-a53c-f6b5663ad5cc-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.714902 4932 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbdc287c-8b65-4c46-8697-8af76f3cae17-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.720488 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbdc287c-8b65-4c46-8697-8af76f3cae17-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fbdc287c-8b65-4c46-8697-8af76f3cae17" (UID: "fbdc287c-8b65-4c46-8697-8af76f3cae17"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.720795 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5202101c-f325-4956-a53c-f6b5663ad5cc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5202101c-f325-4956-a53c-f6b5663ad5cc" (UID: "5202101c-f325-4956-a53c-f6b5663ad5cc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.816665 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbdc287c-8b65-4c46-8697-8af76f3cae17-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:40 crc kubenswrapper[4932]: I0218 19:36:40.816701 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5202101c-f325-4956-a53c-f6b5663ad5cc-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.243241 4932 generic.go:334] "Generic (PLEG): container finished" podID="18e44919-11c5-4974-9c71-ff803e668247" containerID="92c4e7fb68e8f7dfb6986ed0cee4d733efb5ba5235fa8329b6cb5754629a9a84" exitCode=0
Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.243523 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" event={"ID":"18e44919-11c5-4974-9c71-ff803e668247","Type":"ContainerDied","Data":"92c4e7fb68e8f7dfb6986ed0cee4d733efb5ba5235fa8329b6cb5754629a9a84"}
Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.247854 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.247860 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fbdc287c-8b65-4c46-8697-8af76f3cae17","Type":"ContainerDied","Data":"cb8ac628b41bce3099321ef5f45cf31fa63dc85d80a600c0ec1b0aff786fa67b"}
Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.247896 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb8ac628b41bce3099321ef5f45cf31fa63dc85d80a600c0ec1b0aff786fa67b"
Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.251886 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5202101c-f325-4956-a53c-f6b5663ad5cc","Type":"ContainerDied","Data":"1e7d7bb277500c87441c43a9dbcbe843235a8108c8af31def1f0b1876f3703b9"}
Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.251929 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e7d7bb277500c87441c43a9dbcbe843235a8108c8af31def1f0b1876f3703b9"
Feb 18 19:36:41 crc kubenswrapper[4932]: I0218 19:36:41.252013 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 19:36:43 crc kubenswrapper[4932]: I0218 19:36:43.256724 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt"
Feb 18 19:36:43 crc kubenswrapper[4932]: I0218 19:36:43.261429 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d73072e-7e9b-4ae7-92ca-5950da33ed6c-metrics-certs\") pod \"network-metrics-daemon-kdjbt\" (UID: \"1d73072e-7e9b-4ae7-92ca-5950da33ed6c\") " pod="openshift-multus/network-metrics-daemon-kdjbt"
Feb 18 19:36:43 crc kubenswrapper[4932]: I0218 19:36:43.398951 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kdjbt"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.162905 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.168165 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.184967 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-client-ca\") pod \"28fd23a7-1b44-440f-be4a-8c236cf8902b\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") "
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185089 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-config\") pod \"18e44919-11c5-4974-9c71-ff803e668247\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") "
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185154 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-proxy-ca-bundles\") pod \"18e44919-11c5-4974-9c71-ff803e668247\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") "
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185271 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mg62\" (UniqueName: \"kubernetes.io/projected/18e44919-11c5-4974-9c71-ff803e668247-kube-api-access-7mg62\") pod \"18e44919-11c5-4974-9c71-ff803e668247\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") "
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185370 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e44919-11c5-4974-9c71-ff803e668247-serving-cert\") pod \"18e44919-11c5-4974-9c71-ff803e668247\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") "
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185501 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2lqf\" (UniqueName: \"kubernetes.io/projected/28fd23a7-1b44-440f-be4a-8c236cf8902b-kube-api-access-b2lqf\") pod \"28fd23a7-1b44-440f-be4a-8c236cf8902b\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") "
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185607 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd23a7-1b44-440f-be4a-8c236cf8902b-serving-cert\") pod \"28fd23a7-1b44-440f-be4a-8c236cf8902b\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") "
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185656 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-client-ca\") pod \"18e44919-11c5-4974-9c71-ff803e668247\" (UID: \"18e44919-11c5-4974-9c71-ff803e668247\") "
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.185716 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-config\") pod \"28fd23a7-1b44-440f-be4a-8c236cf8902b\" (UID: \"28fd23a7-1b44-440f-be4a-8c236cf8902b\") "
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.187787 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "18e44919-11c5-4974-9c71-ff803e668247" (UID: "18e44919-11c5-4974-9c71-ff803e668247"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.187799 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-client-ca" (OuterVolumeSpecName: "client-ca") pod "18e44919-11c5-4974-9c71-ff803e668247" (UID: "18e44919-11c5-4974-9c71-ff803e668247"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.188013 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-client-ca" (OuterVolumeSpecName: "client-ca") pod "28fd23a7-1b44-440f-be4a-8c236cf8902b" (UID: "28fd23a7-1b44-440f-be4a-8c236cf8902b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.189455 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-config" (OuterVolumeSpecName: "config") pod "28fd23a7-1b44-440f-be4a-8c236cf8902b" (UID: "28fd23a7-1b44-440f-be4a-8c236cf8902b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.191396 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e44919-11c5-4974-9c71-ff803e668247-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "18e44919-11c5-4974-9c71-ff803e668247" (UID: "18e44919-11c5-4974-9c71-ff803e668247"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.194490 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-config" (OuterVolumeSpecName: "config") pod "18e44919-11c5-4974-9c71-ff803e668247" (UID: "18e44919-11c5-4974-9c71-ff803e668247"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.194605 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e44919-11c5-4974-9c71-ff803e668247-kube-api-access-7mg62" (OuterVolumeSpecName: "kube-api-access-7mg62") pod "18e44919-11c5-4974-9c71-ff803e668247" (UID: "18e44919-11c5-4974-9c71-ff803e668247"). InnerVolumeSpecName "kube-api-access-7mg62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.197034 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28fd23a7-1b44-440f-be4a-8c236cf8902b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "28fd23a7-1b44-440f-be4a-8c236cf8902b" (UID: "28fd23a7-1b44-440f-be4a-8c236cf8902b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.198223 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28fd23a7-1b44-440f-be4a-8c236cf8902b-kube-api-access-b2lqf" (OuterVolumeSpecName: "kube-api-access-b2lqf") pod "28fd23a7-1b44-440f-be4a-8c236cf8902b" (UID: "28fd23a7-1b44-440f-be4a-8c236cf8902b"). InnerVolumeSpecName "kube-api-access-b2lqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200549 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"]
Feb 18 19:36:45 crc kubenswrapper[4932]: E0218 19:36:45.200726 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e44919-11c5-4974-9c71-ff803e668247" containerName="controller-manager"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200737 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e44919-11c5-4974-9c71-ff803e668247" containerName="controller-manager"
Feb 18 19:36:45 crc kubenswrapper[4932]: E0218 19:36:45.200749 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5202101c-f325-4956-a53c-f6b5663ad5cc" containerName="pruner"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200756 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5202101c-f325-4956-a53c-f6b5663ad5cc" containerName="pruner"
Feb 18 19:36:45 crc kubenswrapper[4932]: E0218 19:36:45.200765 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbdc287c-8b65-4c46-8697-8af76f3cae17" containerName="pruner"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200770 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbdc287c-8b65-4c46-8697-8af76f3cae17" containerName="pruner"
Feb 18 19:36:45 crc kubenswrapper[4932]: E0218 19:36:45.200783 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28fd23a7-1b44-440f-be4a-8c236cf8902b" containerName="route-controller-manager"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200788 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="28fd23a7-1b44-440f-be4a-8c236cf8902b" containerName="route-controller-manager"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200864 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="28fd23a7-1b44-440f-be4a-8c236cf8902b" containerName="route-controller-manager"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200875 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbdc287c-8b65-4c46-8697-8af76f3cae17" containerName="pruner"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200882 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e44919-11c5-4974-9c71-ff803e668247" containerName="controller-manager"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.200894 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5202101c-f325-4956-a53c-f6b5663ad5cc" containerName="pruner"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.201243 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.207101 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"]
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.274502 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.275101 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gkgsj" event={"ID":"18e44919-11c5-4974-9c71-ff803e668247","Type":"ContainerDied","Data":"aa724fc4a2394799ac8478df313683148bbb44ac563a7fa7a5bf6e498abd0bc7"}
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.275158 4932 scope.go:117] "RemoveContainer" containerID="92c4e7fb68e8f7dfb6986ed0cee4d733efb5ba5235fa8329b6cb5754629a9a84"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.277393 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q" event={"ID":"28fd23a7-1b44-440f-be4a-8c236cf8902b","Type":"ContainerDied","Data":"7444c4d3cedc79cae24f1e017b9fa1b3385d64a4dc475008ab7f7a213fdab561"}
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.277461 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.289995 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xsb\" (UniqueName: \"kubernetes.io/projected/195f4a7f-a008-4ca1-96d4-771758b838b9-kube-api-access-49xsb\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292050 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-proxy-ca-bundles\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292114 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195f4a7f-a008-4ca1-96d4-771758b838b9-serving-cert\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292156 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-config\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292219 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-client-ca\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292292 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fd23a7-1b44-440f-be4a-8c236cf8902b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292306 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292318 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292329 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28fd23a7-1b44-440f-be4a-8c236cf8902b-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292343 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292353 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/18e44919-11c5-4974-9c71-ff803e668247-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292364 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mg62\" (UniqueName: \"kubernetes.io/projected/18e44919-11c5-4974-9c71-ff803e668247-kube-api-access-7mg62\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292373 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18e44919-11c5-4974-9c71-ff803e668247-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.292386 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2lqf\" (UniqueName: \"kubernetes.io/projected/28fd23a7-1b44-440f-be4a-8c236cf8902b-kube-api-access-b2lqf\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.310882 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gkgsj"]
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.314748 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gkgsj"]
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.318093 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"]
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.321156 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnq5q"]
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.393625 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-proxy-ca-bundles\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.393677 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195f4a7f-a008-4ca1-96d4-771758b838b9-serving-cert\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.393709 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-config\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.393729 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-client-ca\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.393749 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49xsb\" (UniqueName: \"kubernetes.io/projected/195f4a7f-a008-4ca1-96d4-771758b838b9-kube-api-access-49xsb\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.394818 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-proxy-ca-bundles\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.394898 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-client-ca\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.395470 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-config\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.398142 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195f4a7f-a008-4ca1-96d4-771758b838b9-serving-cert\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.411334 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49xsb\" (UniqueName: \"kubernetes.io/projected/195f4a7f-a008-4ca1-96d4-771758b838b9-kube-api-access-49xsb\") pod \"controller-manager-b4b65bcb8-ns9lc\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:45 crc kubenswrapper[4932]: I0218 19:36:45.584760 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"
Feb 18 19:36:46 crc kubenswrapper[4932]: I0218 19:36:46.850293 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:36:47 crc kubenswrapper[4932]: I0218 19:36:47.190000 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e44919-11c5-4974-9c71-ff803e668247" path="/var/lib/kubelet/pods/18e44919-11c5-4974-9c71-ff803e668247/volumes"
Feb 18 19:36:47 crc kubenswrapper[4932]: I0218 19:36:47.190722 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28fd23a7-1b44-440f-be4a-8c236cf8902b" path="/var/lib/kubelet/pods/28fd23a7-1b44-440f-be4a-8c236cf8902b/volumes"
Feb 18 19:36:47 crc kubenswrapper[4932]: I0218 19:36:47.627525 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-cn2nc"
Feb 18 19:36:48 crc kubenswrapper[4932]: I0218 19:36:48.294318 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fgjll"
Feb 18 19:36:48 crc kubenswrapper[4932]: I0218 19:36:48.298782 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fgjll"
Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.623615 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"]
Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.624464 4932 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.627374 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.627420 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.627612 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.627391 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.627750 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.627856 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.628098 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"] Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.752252 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-config\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.752336 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b627\" (UniqueName: \"kubernetes.io/projected/c823406a-c4f6-4335-be43-312c5336c730-kube-api-access-9b627\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.752377 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c823406a-c4f6-4335-be43-312c5336c730-serving-cert\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.752420 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-client-ca\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.853908 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c823406a-c4f6-4335-be43-312c5336c730-serving-cert\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.854068 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-client-ca\") pod 
\"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.855500 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-client-ca\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.855575 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-config\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.855610 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b627\" (UniqueName: \"kubernetes.io/projected/c823406a-c4f6-4335-be43-312c5336c730-kube-api-access-9b627\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.857301 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-config\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.862126 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c823406a-c4f6-4335-be43-312c5336c730-serving-cert\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.874025 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b627\" (UniqueName: \"kubernetes.io/projected/c823406a-c4f6-4335-be43-312c5336c730-kube-api-access-9b627\") pod \"route-controller-manager-68c79b7788-6k9bw\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:49 crc kubenswrapper[4932]: I0218 19:36:49.949330 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:36:53 crc kubenswrapper[4932]: E0218 19:36:53.109517 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 19:36:53 crc kubenswrapper[4932]: E0218 19:36:53.109710 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgm8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-j2xgw_openshift-marketplace(62bbf001-ce57-471f-ad28-1d892d0d30e9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:36:53 crc kubenswrapper[4932]: E0218 19:36:53.110869 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-j2xgw" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" Feb 18 19:36:55 crc 
kubenswrapper[4932]: I0218 19:36:55.125102 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:36:57 crc kubenswrapper[4932]: I0218 19:36:57.606814 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:36:57 crc kubenswrapper[4932]: I0218 19:36:57.606932 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:36:58 crc kubenswrapper[4932]: I0218 19:36:58.600150 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xfmpj" Feb 18 19:36:59 crc kubenswrapper[4932]: I0218 19:36:59.754892 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"] Feb 18 19:36:59 crc kubenswrapper[4932]: I0218 19:36:59.836301 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"] Feb 18 19:37:02 crc kubenswrapper[4932]: E0218 19:37:02.233218 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-j2xgw" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" Feb 18 19:37:04 crc kubenswrapper[4932]: E0218 19:37:04.674588 4932 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 19:37:04 crc kubenswrapper[4932]: E0218 19:37:04.674869 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hghw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-gbkr8_openshift-marketplace(29a4229b-f53b-4cd7-b81b-7fc2dfded045): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:37:04 crc kubenswrapper[4932]: E0218 19:37:04.676213 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gbkr8" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" Feb 18 19:37:04 crc kubenswrapper[4932]: E0218 19:37:04.706523 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 19:37:04 crc kubenswrapper[4932]: E0218 19:37:04.706761 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sr45c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qvwc8_openshift-marketplace(cafe1e82-ef19-4345-825e-cc9bf016b353): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:37:04 crc kubenswrapper[4932]: E0218 19:37:04.707961 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qvwc8" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" Feb 18 19:37:06 crc 
kubenswrapper[4932]: I0218 19:37:06.781621 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.782682 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.785186 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.785279 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.793089 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.793141 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.893894 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.893948 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.894019 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:06 crc kubenswrapper[4932]: I0218 19:37:06.912010 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:07 crc kubenswrapper[4932]: I0218 19:37:07.098999 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:07 crc kubenswrapper[4932]: I0218 19:37:07.889531 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:37:07 crc kubenswrapper[4932]: E0218 19:37:07.944197 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qvwc8" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" Feb 18 19:37:07 crc kubenswrapper[4932]: E0218 19:37:07.944257 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gbkr8" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.197793 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.198127 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lttp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4w2tj_openshift-marketplace(b77a623a-ff2e-45aa-9004-b211b0200a3f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.199230 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4w2tj" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" Feb 18 19:37:09 crc 
kubenswrapper[4932]: I0218 19:37:09.199944 4932 scope.go:117] "RemoveContainer" containerID="34ae58b97ea4a3420f81b7dbc9be8a4d3eb79fc358a2e2df9dc60f04b8d15203" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.203800 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.203930 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-522zn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fallba
ckToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vwwjl_openshift-marketplace(83fa5ba7-c2d8-4d68-839f-ba2f4cad568a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.205282 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vwwjl" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.226768 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.226922 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5ks5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-78d5s_openshift-marketplace(2483e7fb-5cc5-4715-8eea-fd5cf6b31d75): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.228371 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-78d5s" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" Feb 18 19:37:09 crc 
kubenswrapper[4932]: I0218 19:37:09.442651 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"] Feb 18 19:37:09 crc kubenswrapper[4932]: W0218 19:37:09.450222 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod195f4a7f_a008_4ca1_96d4_771758b838b9.slice/crio-2fe701e682243488073f339c700bc104b7e5d361a8ce35793b4cf53c16c9c88b WatchSource:0}: Error finding container 2fe701e682243488073f339c700bc104b7e5d361a8ce35793b4cf53c16c9c88b: Status 404 returned error can't find the container with id 2fe701e682243488073f339c700bc104b7e5d361a8ce35793b4cf53c16c9c88b Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.478394 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"] Feb 18 19:37:09 crc kubenswrapper[4932]: W0218 19:37:09.489000 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc823406a_c4f6_4335_be43_312c5336c730.slice/crio-f39ddfc2e987d91c204f3ba288daac538653177cbb76acaa1de063014d9d24bf WatchSource:0}: Error finding container f39ddfc2e987d91c204f3ba288daac538653177cbb76acaa1de063014d9d24bf: Status 404 returned error can't find the container with id f39ddfc2e987d91c204f3ba288daac538653177cbb76acaa1de063014d9d24bf Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.723326 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kdjbt"] Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.747543 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:37:09 crc kubenswrapper[4932]: W0218 19:37:09.755091 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-poddba97173_1fe4_4a77_acd6_ec71b7aea5b3.slice/crio-feb4a7d377ea26a35159f7d75865c2832616f9d3cb5a86adfb836dad5ee65129 WatchSource:0}: Error finding container feb4a7d377ea26a35159f7d75865c2832616f9d3cb5a86adfb836dad5ee65129: Status 404 returned error can't find the container with id feb4a7d377ea26a35159f7d75865c2832616f9d3cb5a86adfb836dad5ee65129 Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.899926 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerStarted","Data":"fb361fdaea379654dbc86cd68517d68e807abad8cc09c0668f73e69287045372"} Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.903107 4932 generic.go:334] "Generic (PLEG): container finished" podID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerID="c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3" exitCode=0 Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.903201 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p69tc" event={"ID":"81ac7afd-2261-4af0-9b59-f18c98424c21","Type":"ContainerDied","Data":"c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3"} Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.912846 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" event={"ID":"c823406a-c4f6-4335-be43-312c5336c730","Type":"ContainerStarted","Data":"157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d"} Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.912886 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" event={"ID":"c823406a-c4f6-4335-be43-312c5336c730","Type":"ContainerStarted","Data":"f39ddfc2e987d91c204f3ba288daac538653177cbb76acaa1de063014d9d24bf"} Feb 18 19:37:09 crc 
kubenswrapper[4932]: I0218 19:37:09.912984 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" podUID="c823406a-c4f6-4335-be43-312c5336c730" containerName="route-controller-manager" containerID="cri-o://157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d" gracePeriod=30 Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.913400 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.915752 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" event={"ID":"1d73072e-7e9b-4ae7-92ca-5950da33ed6c","Type":"ContainerStarted","Data":"e6a7284d67adc70d25e854c2aed04df089ab38032a06db87abea137d5f479fb6"} Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.917854 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dba97173-1fe4-4a77-acd6-ec71b7aea5b3","Type":"ContainerStarted","Data":"feb4a7d377ea26a35159f7d75865c2832616f9d3cb5a86adfb836dad5ee65129"} Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.922366 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" podUID="195f4a7f-a008-4ca1-96d4-771758b838b9" containerName="controller-manager" containerID="cri-o://d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce" gracePeriod=30 Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.922565 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" event={"ID":"195f4a7f-a008-4ca1-96d4-771758b838b9","Type":"ContainerStarted","Data":"d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce"} Feb 18 19:37:09 
crc kubenswrapper[4932]: I0218 19:37:09.922608 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" event={"ID":"195f4a7f-a008-4ca1-96d4-771758b838b9","Type":"ContainerStarted","Data":"2fe701e682243488073f339c700bc104b7e5d361a8ce35793b4cf53c16c9c88b"} Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.923441 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.925857 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4w2tj" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.928213 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vwwjl" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" Feb 18 19:37:09 crc kubenswrapper[4932]: E0218 19:37:09.928264 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-78d5s" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" Feb 18 19:37:09 crc kubenswrapper[4932]: I0218 19:37:09.947948 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.005949 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" podStartSLOduration=31.005925443 podStartE2EDuration="31.005925443s" podCreationTimestamp="2026-02-18 19:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:09.976939227 +0000 UTC m=+193.558894072" watchObservedRunningTime="2026-02-18 19:37:10.005925443 +0000 UTC m=+193.587880288" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.034778 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" podStartSLOduration=31.034752095 podStartE2EDuration="31.034752095s" podCreationTimestamp="2026-02-18 19:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:10.032310171 +0000 UTC m=+193.614265016" watchObservedRunningTime="2026-02-18 19:37:10.034752095 +0000 UTC m=+193.616706940" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.109139 4932 patch_prober.go:28] interesting pod/route-controller-manager-68c79b7788-6k9bw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:53340->10.217.0.55:8443: read: connection reset by peer" start-of-body= Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.109214 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" podUID="c823406a-c4f6-4335-be43-312c5336c730" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:53340->10.217.0.55:8443: read: connection reset by peer" Feb 18 19:37:10 crc 
kubenswrapper[4932]: I0218 19:37:10.109680 4932 patch_prober.go:28] interesting pod/route-controller-manager-68c79b7788-6k9bw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.109735 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" podUID="c823406a-c4f6-4335-be43-312c5336c730" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.356745 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.394263 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-648d7854bd-2rffd"] Feb 18 19:37:10 crc kubenswrapper[4932]: E0218 19:37:10.394601 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195f4a7f-a008-4ca1-96d4-771758b838b9" containerName="controller-manager" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.394622 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="195f4a7f-a008-4ca1-96d4-771758b838b9" containerName="controller-manager" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.394771 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="195f4a7f-a008-4ca1-96d4-771758b838b9" containerName="controller-manager" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.395288 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.402425 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-648d7854bd-2rffd"] Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.498899 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-68c79b7788-6k9bw_c823406a-c4f6-4335-be43-312c5336c730/route-controller-manager/0.log" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.499256 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.499858 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-client-ca\") pod \"195f4a7f-a008-4ca1-96d4-771758b838b9\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.499928 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-config\") pod \"195f4a7f-a008-4ca1-96d4-771758b838b9\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.499980 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-proxy-ca-bundles\") pod \"195f4a7f-a008-4ca1-96d4-771758b838b9\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.500792 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "195f4a7f-a008-4ca1-96d4-771758b838b9" (UID: "195f4a7f-a008-4ca1-96d4-771758b838b9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.500855 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-client-ca" (OuterVolumeSpecName: "client-ca") pod "195f4a7f-a008-4ca1-96d4-771758b838b9" (UID: "195f4a7f-a008-4ca1-96d4-771758b838b9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.500873 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-config" (OuterVolumeSpecName: "config") pod "195f4a7f-a008-4ca1-96d4-771758b838b9" (UID: "195f4a7f-a008-4ca1-96d4-771758b838b9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501268 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49xsb\" (UniqueName: \"kubernetes.io/projected/195f4a7f-a008-4ca1-96d4-771758b838b9-kube-api-access-49xsb\") pod \"195f4a7f-a008-4ca1-96d4-771758b838b9\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501296 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195f4a7f-a008-4ca1-96d4-771758b838b9-serving-cert\") pod \"195f4a7f-a008-4ca1-96d4-771758b838b9\" (UID: \"195f4a7f-a008-4ca1-96d4-771758b838b9\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501382 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-proxy-ca-bundles\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501427 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-config\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501461 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n7r6\" (UniqueName: \"kubernetes.io/projected/3f701e3b-5068-423b-ae72-2097ca900619-kube-api-access-5n7r6\") pod \"controller-manager-648d7854bd-2rffd\" (UID: 
\"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501479 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-client-ca\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501501 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f701e3b-5068-423b-ae72-2097ca900619-serving-cert\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501542 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501552 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.501562 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/195f4a7f-a008-4ca1-96d4-771758b838b9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.507619 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195f4a7f-a008-4ca1-96d4-771758b838b9-serving-cert" (OuterVolumeSpecName: 
"serving-cert") pod "195f4a7f-a008-4ca1-96d4-771758b838b9" (UID: "195f4a7f-a008-4ca1-96d4-771758b838b9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.508906 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195f4a7f-a008-4ca1-96d4-771758b838b9-kube-api-access-49xsb" (OuterVolumeSpecName: "kube-api-access-49xsb") pod "195f4a7f-a008-4ca1-96d4-771758b838b9" (UID: "195f4a7f-a008-4ca1-96d4-771758b838b9"). InnerVolumeSpecName "kube-api-access-49xsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602626 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-config\") pod \"c823406a-c4f6-4335-be43-312c5336c730\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602679 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c823406a-c4f6-4335-be43-312c5336c730-serving-cert\") pod \"c823406a-c4f6-4335-be43-312c5336c730\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602702 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-client-ca\") pod \"c823406a-c4f6-4335-be43-312c5336c730\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602766 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b627\" (UniqueName: \"kubernetes.io/projected/c823406a-c4f6-4335-be43-312c5336c730-kube-api-access-9b627\") pod 
\"c823406a-c4f6-4335-be43-312c5336c730\" (UID: \"c823406a-c4f6-4335-be43-312c5336c730\") " Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602919 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n7r6\" (UniqueName: \"kubernetes.io/projected/3f701e3b-5068-423b-ae72-2097ca900619-kube-api-access-5n7r6\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602949 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-client-ca\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.602983 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f701e3b-5068-423b-ae72-2097ca900619-serving-cert\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.603034 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-proxy-ca-bundles\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.603077 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-config\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.603131 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49xsb\" (UniqueName: \"kubernetes.io/projected/195f4a7f-a008-4ca1-96d4-771758b838b9-kube-api-access-49xsb\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.603151 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195f4a7f-a008-4ca1-96d4-771758b838b9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.604049 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-config" (OuterVolumeSpecName: "config") pod "c823406a-c4f6-4335-be43-312c5336c730" (UID: "c823406a-c4f6-4335-be43-312c5336c730"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.604669 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-config\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.605003 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-client-ca" (OuterVolumeSpecName: "client-ca") pod "c823406a-c4f6-4335-be43-312c5336c730" (UID: "c823406a-c4f6-4335-be43-312c5336c730"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.605376 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-client-ca\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.606320 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-proxy-ca-bundles\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.607581 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c823406a-c4f6-4335-be43-312c5336c730-kube-api-access-9b627" (OuterVolumeSpecName: "kube-api-access-9b627") pod "c823406a-c4f6-4335-be43-312c5336c730" (UID: "c823406a-c4f6-4335-be43-312c5336c730"). InnerVolumeSpecName "kube-api-access-9b627". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.610391 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c823406a-c4f6-4335-be43-312c5336c730-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c823406a-c4f6-4335-be43-312c5336c730" (UID: "c823406a-c4f6-4335-be43-312c5336c730"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.624473 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f701e3b-5068-423b-ae72-2097ca900619-serving-cert\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.629784 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n7r6\" (UniqueName: \"kubernetes.io/projected/3f701e3b-5068-423b-ae72-2097ca900619-kube-api-access-5n7r6\") pod \"controller-manager-648d7854bd-2rffd\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.704387 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b627\" (UniqueName: \"kubernetes.io/projected/c823406a-c4f6-4335-be43-312c5336c730-kube-api-access-9b627\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.704434 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.704453 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c823406a-c4f6-4335-be43-312c5336c730-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.704472 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c823406a-c4f6-4335-be43-312c5336c730-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 
19:37:10.799927 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.944460 4932 generic.go:334] "Generic (PLEG): container finished" podID="195f4a7f-a008-4ca1-96d4-771758b838b9" containerID="d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce" exitCode=0 Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.944504 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" event={"ID":"195f4a7f-a008-4ca1-96d4-771758b838b9","Type":"ContainerDied","Data":"d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce"} Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.944810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" event={"ID":"195f4a7f-a008-4ca1-96d4-771758b838b9","Type":"ContainerDied","Data":"2fe701e682243488073f339c700bc104b7e5d361a8ce35793b4cf53c16c9c88b"} Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.944548 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.944834 4932 scope.go:117] "RemoveContainer" containerID="d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.950701 4932 generic.go:334] "Generic (PLEG): container finished" podID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerID="fb361fdaea379654dbc86cd68517d68e807abad8cc09c0668f73e69287045372" exitCode=0 Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.950784 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerDied","Data":"fb361fdaea379654dbc86cd68517d68e807abad8cc09c0668f73e69287045372"} Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.959966 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p69tc" event={"ID":"81ac7afd-2261-4af0-9b59-f18c98424c21","Type":"ContainerStarted","Data":"a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2"} Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.970436 4932 scope.go:117] "RemoveContainer" containerID="d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce" Feb 18 19:37:10 crc kubenswrapper[4932]: E0218 19:37:10.976660 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce\": container with ID starting with d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce not found: ID does not exist" containerID="d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.976740 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce"} err="failed to get container status \"d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce\": rpc error: code = NotFound desc = could not find container \"d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce\": container with ID starting with d38a216a82b39bb975814a991d5655202fd40044fea2187926f28e38ebb199ce not found: ID does not exist" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.990782 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-68c79b7788-6k9bw_c823406a-c4f6-4335-be43-312c5336c730/route-controller-manager/0.log" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.990859 4932 generic.go:334] "Generic (PLEG): container finished" podID="c823406a-c4f6-4335-be43-312c5336c730" containerID="157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d" exitCode=255 Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.990990 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" event={"ID":"c823406a-c4f6-4335-be43-312c5336c730","Type":"ContainerDied","Data":"157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d"} Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.991031 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" event={"ID":"c823406a-c4f6-4335-be43-312c5336c730","Type":"ContainerDied","Data":"f39ddfc2e987d91c204f3ba288daac538653177cbb76acaa1de063014d9d24bf"} Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.991060 4932 scope.go:117] "RemoveContainer" containerID="157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d" Feb 18 19:37:10 crc kubenswrapper[4932]: I0218 19:37:10.991088 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw" Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.004665 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p69tc" podStartSLOduration=2.6309662510000003 podStartE2EDuration="45.004646255s" podCreationTimestamp="2026-02-18 19:36:26 +0000 UTC" firstStartedPulling="2026-02-18 19:36:28.004485406 +0000 UTC m=+151.586440261" lastFinishedPulling="2026-02-18 19:37:10.37816543 +0000 UTC m=+193.960120265" observedRunningTime="2026-02-18 19:37:10.999466479 +0000 UTC m=+194.581421344" watchObservedRunningTime="2026-02-18 19:37:11.004646255 +0000 UTC m=+194.586601100" Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.004846 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" event={"ID":"1d73072e-7e9b-4ae7-92ca-5950da33ed6c","Type":"ContainerStarted","Data":"61052ec9d7c58c38600c3eb083a79cedb4677f18cee7a1f55eb74c4fddfc76dd"} Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.004944 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kdjbt" event={"ID":"1d73072e-7e9b-4ae7-92ca-5950da33ed6c","Type":"ContainerStarted","Data":"b26b828b3d627427637f6dba4bc8e7c635d0c8fa26d6e91863152aef240179a8"} Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.008254 4932 generic.go:334] "Generic (PLEG): container finished" podID="dba97173-1fe4-4a77-acd6-ec71b7aea5b3" containerID="54669d850ab4e2c576ace2b30a4fd353020f94b96a65cf27707838b8b12d61bb" exitCode=0 Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.009461 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dba97173-1fe4-4a77-acd6-ec71b7aea5b3","Type":"ContainerDied","Data":"54669d850ab4e2c576ace2b30a4fd353020f94b96a65cf27707838b8b12d61bb"} Feb 18 19:37:11 crc 
kubenswrapper[4932]: I0218 19:37:11.020385 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"] Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.025193 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b4b65bcb8-ns9lc"] Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.026311 4932 scope.go:117] "RemoveContainer" containerID="157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d" Feb 18 19:37:11 crc kubenswrapper[4932]: E0218 19:37:11.027219 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d\": container with ID starting with 157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d not found: ID does not exist" containerID="157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d" Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.027257 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d"} err="failed to get container status \"157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d\": rpc error: code = NotFound desc = could not find container \"157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d\": container with ID starting with 157c3f809daff20e2c2c13f1fea073d50407314de8233127cfae11a101b2727d not found: ID does not exist" Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.058307 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kdjbt" podStartSLOduration=171.058292891 podStartE2EDuration="2m51.058292891s" podCreationTimestamp="2026-02-18 19:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:11.052537612 +0000 UTC m=+194.634492467" watchObservedRunningTime="2026-02-18 19:37:11.058292891 +0000 UTC m=+194.640247736" Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.069128 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"] Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.072527 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68c79b7788-6k9bw"] Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.080566 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-648d7854bd-2rffd"] Feb 18 19:37:11 crc kubenswrapper[4932]: W0218 19:37:11.081046 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f701e3b_5068_423b_ae72_2097ca900619.slice/crio-ffc89283d0d1b05c0e852a5ff74828280f4f0bd46e1714810111665fecf8f740 WatchSource:0}: Error finding container ffc89283d0d1b05c0e852a5ff74828280f4f0bd46e1714810111665fecf8f740: Status 404 returned error can't find the container with id ffc89283d0d1b05c0e852a5ff74828280f4f0bd46e1714810111665fecf8f740 Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.187620 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195f4a7f-a008-4ca1-96d4-771758b838b9" path="/var/lib/kubelet/pods/195f4a7f-a008-4ca1-96d4-771758b838b9/volumes" Feb 18 19:37:11 crc kubenswrapper[4932]: I0218 19:37:11.188303 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c823406a-c4f6-4335-be43-312c5336c730" path="/var/lib/kubelet/pods/c823406a-c4f6-4335-be43-312c5336c730/volumes" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.020987 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" 
event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerStarted","Data":"e34e27d0659e0d99e6372515305dc5e1613a602751683fd615bb6bd8747d32f2"} Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.027523 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" event={"ID":"3f701e3b-5068-423b-ae72-2097ca900619","Type":"ContainerStarted","Data":"07c51006436dce5c79f6a0ca9587b0474d8d3d2fbf6ac368abfe60f9fc273e20"} Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.027589 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" event={"ID":"3f701e3b-5068-423b-ae72-2097ca900619","Type":"ContainerStarted","Data":"ffc89283d0d1b05c0e852a5ff74828280f4f0bd46e1714810111665fecf8f740"} Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.048342 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-chh8j" podStartSLOduration=3.797922683 podStartE2EDuration="43.048325749s" podCreationTimestamp="2026-02-18 19:36:29 +0000 UTC" firstStartedPulling="2026-02-18 19:36:32.131076919 +0000 UTC m=+155.713031764" lastFinishedPulling="2026-02-18 19:37:11.381479975 +0000 UTC m=+194.963434830" observedRunningTime="2026-02-18 19:37:12.046408356 +0000 UTC m=+195.628363221" watchObservedRunningTime="2026-02-18 19:37:12.048325749 +0000 UTC m=+195.630280594" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.263468 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.286030 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" podStartSLOduration=13.285987577 podStartE2EDuration="13.285987577s" podCreationTimestamp="2026-02-18 19:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:12.076437816 +0000 UTC m=+195.658392671" watchObservedRunningTime="2026-02-18 19:37:12.285987577 +0000 UTC m=+195.867942422" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.340850 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kube-api-access\") pod \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.340927 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kubelet-dir\") pod \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\" (UID: \"dba97173-1fe4-4a77-acd6-ec71b7aea5b3\") " Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.341084 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dba97173-1fe4-4a77-acd6-ec71b7aea5b3" (UID: "dba97173-1fe4-4a77-acd6-ec71b7aea5b3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.341452 4932 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.348338 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dba97173-1fe4-4a77-acd6-ec71b7aea5b3" (UID: "dba97173-1fe4-4a77-acd6-ec71b7aea5b3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.442781 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dba97173-1fe4-4a77-acd6-ec71b7aea5b3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.634090 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"] Feb 18 19:37:12 crc kubenswrapper[4932]: E0218 19:37:12.634335 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c823406a-c4f6-4335-be43-312c5336c730" containerName="route-controller-manager" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.634348 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c823406a-c4f6-4335-be43-312c5336c730" containerName="route-controller-manager" Feb 18 19:37:12 crc kubenswrapper[4932]: E0218 19:37:12.634362 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dba97173-1fe4-4a77-acd6-ec71b7aea5b3" containerName="pruner" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.634368 4932 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dba97173-1fe4-4a77-acd6-ec71b7aea5b3" containerName="pruner" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.634476 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="dba97173-1fe4-4a77-acd6-ec71b7aea5b3" containerName="pruner" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.634487 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c823406a-c4f6-4335-be43-312c5336c730" containerName="route-controller-manager" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.634831 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.637178 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.637219 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.637327 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.637396 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.637435 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.637711 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.644572 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-client-ca\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.644613 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-serving-cert\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.644654 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-config\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.644705 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88l4b\" (UniqueName: \"kubernetes.io/projected/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-kube-api-access-88l4b\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.645786 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"] Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.745087 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-client-ca\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.745142 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-serving-cert\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.745206 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-config\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.745233 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88l4b\" (UniqueName: \"kubernetes.io/projected/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-kube-api-access-88l4b\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.746639 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-config\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " 
pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.746678 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-client-ca\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.749777 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-serving-cert\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.769950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88l4b\" (UniqueName: \"kubernetes.io/projected/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-kube-api-access-88l4b\") pod \"route-controller-manager-5589b4dbdd-mgnvw\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:12 crc kubenswrapper[4932]: I0218 19:37:12.951569 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.049409 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.051295 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dba97173-1fe4-4a77-acd6-ec71b7aea5b3","Type":"ContainerDied","Data":"feb4a7d377ea26a35159f7d75865c2832616f9d3cb5a86adfb836dad5ee65129"} Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.051436 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feb4a7d377ea26a35159f7d75865c2832616f9d3cb5a86adfb836dad5ee65129" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.052032 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.060432 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.370880 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"] Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.581929 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.583706 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.586176 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.586472 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.588151 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.660039 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kube-api-access\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.660144 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.660205 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-var-lock\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.761537 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-var-lock\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.761608 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kube-api-access\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.761662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.761759 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.761760 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-var-lock\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.781665 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kube-api-access\") pod \"installer-9-crc\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:13 crc kubenswrapper[4932]: I0218 19:37:13.902098 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.099605 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xnxl9"] Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.108277 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" event={"ID":"d1eaf5e6-7318-4473-8317-8a38fcca1fdc","Type":"ContainerStarted","Data":"1bde042b0eca7d25e70e8bc8f868a4af5d16d9cfdbec831cd7e71b1619585a03"} Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.108859 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" event={"ID":"d1eaf5e6-7318-4473-8317-8a38fcca1fdc","Type":"ContainerStarted","Data":"ae3a5e90285132f6077bd152728f0c93ddc5e392da325f1bb2715b4f11c105b6"} Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.109491 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.155667 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" podStartSLOduration=15.155648527 podStartE2EDuration="15.155648527s" podCreationTimestamp="2026-02-18 19:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:14.153080025 +0000 UTC m=+197.735034870" watchObservedRunningTime="2026-02-18 19:37:14.155648527 +0000 UTC m=+197.737603372" Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 
19:37:14.246711 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 19:37:14 crc kubenswrapper[4932]: I0218 19:37:14.366959 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:15 crc kubenswrapper[4932]: I0218 19:37:15.115549 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b38b0e86-4a7b-4436-a0ef-565a61a1eab4","Type":"ContainerStarted","Data":"89f0774e9a169a85e00453d4419c3e930e811396c9527b57c8e29093ef32ec9f"} Feb 18 19:37:15 crc kubenswrapper[4932]: I0218 19:37:15.115622 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b38b0e86-4a7b-4436-a0ef-565a61a1eab4","Type":"ContainerStarted","Data":"47e12ed4376656b94af9a3460a8df57cde49986c200ba6e60e8d0c9fbcd288a4"} Feb 18 19:37:15 crc kubenswrapper[4932]: I0218 19:37:15.134909 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.134888468 podStartE2EDuration="2.134888468s" podCreationTimestamp="2026-02-18 19:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:15.130842011 +0000 UTC m=+198.712796856" watchObservedRunningTime="2026-02-18 19:37:15.134888468 +0000 UTC m=+198.716843343" Feb 18 19:37:17 crc kubenswrapper[4932]: I0218 19:37:17.280005 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:37:17 crc kubenswrapper[4932]: I0218 19:37:17.280473 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:37:17 crc kubenswrapper[4932]: I0218 19:37:17.446829 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:37:18 crc kubenswrapper[4932]: I0218 19:37:18.135113 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerStarted","Data":"5fa3af86ad8e20edc339dfb0d7d75e1dba3410f262c6355782e4c035746708c1"} Feb 18 19:37:18 crc kubenswrapper[4932]: I0218 19:37:18.178907 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:37:18 crc kubenswrapper[4932]: I0218 19:37:18.415148 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p69tc"] Feb 18 19:37:19 crc kubenswrapper[4932]: I0218 19:37:19.143554 4932 generic.go:334] "Generic (PLEG): container finished" podID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerID="5fa3af86ad8e20edc339dfb0d7d75e1dba3410f262c6355782e4c035746708c1" exitCode=0 Feb 18 19:37:19 crc kubenswrapper[4932]: I0218 19:37:19.143634 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerDied","Data":"5fa3af86ad8e20edc339dfb0d7d75e1dba3410f262c6355782e4c035746708c1"} Feb 18 19:37:19 crc kubenswrapper[4932]: I0218 19:37:19.916634 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:37:19 crc kubenswrapper[4932]: I0218 19:37:19.918273 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:37:19 crc kubenswrapper[4932]: I0218 19:37:19.958409 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 
19:37:20.149938 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerStarted","Data":"cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508"} Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.151666 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerStarted","Data":"13c758cf33ac2064fd2a2bac98c4ca52868f7188bbf8e3e8b926c0341705af4b"} Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.151981 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p69tc" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="registry-server" containerID="cri-o://a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2" gracePeriod=2 Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.186628 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j2xgw" podStartSLOduration=2.636009801 podStartE2EDuration="54.186611517s" podCreationTimestamp="2026-02-18 19:36:26 +0000 UTC" firstStartedPulling="2026-02-18 19:36:28.001909378 +0000 UTC m=+151.583864233" lastFinishedPulling="2026-02-18 19:37:19.552511104 +0000 UTC m=+203.134465949" observedRunningTime="2026-02-18 19:37:20.185392098 +0000 UTC m=+203.767346943" watchObservedRunningTime="2026-02-18 19:37:20.186611517 +0000 UTC m=+203.768566362" Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.195489 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.856946 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.868659 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-catalog-content\") pod \"81ac7afd-2261-4af0-9b59-f18c98424c21\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.868790 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-utilities\") pod \"81ac7afd-2261-4af0-9b59-f18c98424c21\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.868859 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvppw\" (UniqueName: \"kubernetes.io/projected/81ac7afd-2261-4af0-9b59-f18c98424c21-kube-api-access-vvppw\") pod \"81ac7afd-2261-4af0-9b59-f18c98424c21\" (UID: \"81ac7afd-2261-4af0-9b59-f18c98424c21\") " Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.869455 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-utilities" (OuterVolumeSpecName: "utilities") pod "81ac7afd-2261-4af0-9b59-f18c98424c21" (UID: "81ac7afd-2261-4af0-9b59-f18c98424c21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.876001 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ac7afd-2261-4af0-9b59-f18c98424c21-kube-api-access-vvppw" (OuterVolumeSpecName: "kube-api-access-vvppw") pod "81ac7afd-2261-4af0-9b59-f18c98424c21" (UID: "81ac7afd-2261-4af0-9b59-f18c98424c21"). InnerVolumeSpecName "kube-api-access-vvppw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.970114 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:20 crc kubenswrapper[4932]: I0218 19:37:20.970145 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvppw\" (UniqueName: \"kubernetes.io/projected/81ac7afd-2261-4af0-9b59-f18c98424c21-kube-api-access-vvppw\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.104900 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81ac7afd-2261-4af0-9b59-f18c98424c21" (UID: "81ac7afd-2261-4af0-9b59-f18c98424c21"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.159904 4932 generic.go:334] "Generic (PLEG): container finished" podID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerID="a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2" exitCode=0 Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.159971 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p69tc" event={"ID":"81ac7afd-2261-4af0-9b59-f18c98424c21","Type":"ContainerDied","Data":"a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2"} Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.159998 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p69tc" event={"ID":"81ac7afd-2261-4af0-9b59-f18c98424c21","Type":"ContainerDied","Data":"c390b6f5bfce7b21488ea351096dff0534a3fb41e4e604cf85b8536016c29379"} Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 
19:37:21.160014 4932 scope.go:117] "RemoveContainer" containerID="a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.160077 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p69tc" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.166966 4932 generic.go:334] "Generic (PLEG): container finished" podID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerID="cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508" exitCode=0 Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.167075 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerDied","Data":"cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508"} Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.170977 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ac7afd-2261-4af0-9b59-f18c98424c21-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.184294 4932 scope.go:117] "RemoveContainer" containerID="c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.222289 4932 scope.go:117] "RemoveContainer" containerID="1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.237018 4932 scope.go:117] "RemoveContainer" containerID="a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2" Feb 18 19:37:21 crc kubenswrapper[4932]: E0218 19:37:21.242166 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2\": container with ID 
starting with a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2 not found: ID does not exist" containerID="a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.242227 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2"} err="failed to get container status \"a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2\": rpc error: code = NotFound desc = could not find container \"a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2\": container with ID starting with a90292b715a6b65a014ed7eb8deabd761ebf3e153cc055446b537c8b0a0cffc2 not found: ID does not exist" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.242253 4932 scope.go:117] "RemoveContainer" containerID="c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3" Feb 18 19:37:21 crc kubenswrapper[4932]: E0218 19:37:21.243138 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3\": container with ID starting with c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3 not found: ID does not exist" containerID="c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.243202 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3"} err="failed to get container status \"c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3\": rpc error: code = NotFound desc = could not find container \"c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3\": container with ID starting with c6517e783f0d471966554e41d3a415905201c9c873529a22d7d5a2be4ae0b7f3 not found: 
ID does not exist" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.243243 4932 scope.go:117] "RemoveContainer" containerID="1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547" Feb 18 19:37:21 crc kubenswrapper[4932]: E0218 19:37:21.244244 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547\": container with ID starting with 1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547 not found: ID does not exist" containerID="1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.244291 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547"} err="failed to get container status \"1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547\": rpc error: code = NotFound desc = could not find container \"1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547\": container with ID starting with 1b0d2b2ae55ed4f436ccb38016238e82fe7b466707fcaf883296c2a76ea39547 not found: ID does not exist" Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.246927 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p69tc"] Feb 18 19:37:21 crc kubenswrapper[4932]: I0218 19:37:21.250152 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p69tc"] Feb 18 19:37:23 crc kubenswrapper[4932]: I0218 19:37:23.187912 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" path="/var/lib/kubelet/pods/81ac7afd-2261-4af0-9b59-f18c98424c21/volumes" Feb 18 19:37:26 crc kubenswrapper[4932]: I0218 19:37:26.850764 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:37:26 crc kubenswrapper[4932]: I0218 19:37:26.851335 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:37:26 crc kubenswrapper[4932]: I0218 19:37:26.903619 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:37:27 crc kubenswrapper[4932]: I0218 19:37:27.250052 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:37:27 crc kubenswrapper[4932]: I0218 19:37:27.607191 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:37:27 crc kubenswrapper[4932]: I0218 19:37:27.607277 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:37:27 crc kubenswrapper[4932]: I0218 19:37:27.607338 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:37:27 crc kubenswrapper[4932]: I0218 19:37:27.608058 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Feb 18 19:37:27 crc kubenswrapper[4932]: I0218 19:37:27.608135 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e" gracePeriod=600 Feb 18 19:37:28 crc kubenswrapper[4932]: I0218 19:37:28.217460 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e" exitCode=0 Feb 18 19:37:28 crc kubenswrapper[4932]: I0218 19:37:28.217566 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e"} Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.228842 4932 generic.go:334] "Generic (PLEG): container finished" podID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerID="0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.228931 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78d5s" event={"ID":"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75","Type":"ContainerDied","Data":"0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af"} Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.241621 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerStarted","Data":"3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea"} Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.250593 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"1f6c0fd0c3107fc39e9f403b60bf7cadd547322feaa279357c61854210904894"} Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.253479 4932 generic.go:334] "Generic (PLEG): container finished" podID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerID="399844cbfb1eed438dbae81663b568d5834893c25f35e7193be65debdd42cfaa" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.253575 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w2tj" event={"ID":"b77a623a-ff2e-45aa-9004-b211b0200a3f","Type":"ContainerDied","Data":"399844cbfb1eed438dbae81663b568d5834893c25f35e7193be65debdd42cfaa"} Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.257785 4932 generic.go:334] "Generic (PLEG): container finished" podID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerID="615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.257895 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvwc8" event={"ID":"cafe1e82-ef19-4345-825e-cc9bf016b353","Type":"ContainerDied","Data":"615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13"} Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.261535 4932 generic.go:334] "Generic (PLEG): container finished" podID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerID="e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4932]: I0218 19:37:29.261605 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerDied","Data":"e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0"} Feb 18 19:37:29 crc 
kubenswrapper[4932]: I0218 19:37:29.295694 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gbkr8" podStartSLOduration=5.910692654 podStartE2EDuration="1m3.295673028s" podCreationTimestamp="2026-02-18 19:36:26 +0000 UTC" firstStartedPulling="2026-02-18 19:36:28.006755906 +0000 UTC m=+151.588710761" lastFinishedPulling="2026-02-18 19:37:25.39173629 +0000 UTC m=+208.973691135" observedRunningTime="2026-02-18 19:37:29.289128489 +0000 UTC m=+212.871083334" watchObservedRunningTime="2026-02-18 19:37:29.295673028 +0000 UTC m=+212.877627883" Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.271835 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerStarted","Data":"2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5"} Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.276521 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78d5s" event={"ID":"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75","Type":"ContainerStarted","Data":"d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531"} Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.279832 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w2tj" event={"ID":"b77a623a-ff2e-45aa-9004-b211b0200a3f","Type":"ContainerStarted","Data":"8629bd2837aebb06f17bda76bfe6b4989212f8b67eec3674f76174649de59a2e"} Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.286011 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvwc8" event={"ID":"cafe1e82-ef19-4345-825e-cc9bf016b353","Type":"ContainerStarted","Data":"6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26"} Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.293258 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vwwjl" podStartSLOduration=4.58082859 podStartE2EDuration="1m2.293234713s" podCreationTimestamp="2026-02-18 19:36:28 +0000 UTC" firstStartedPulling="2026-02-18 19:36:32.127320435 +0000 UTC m=+155.709275280" lastFinishedPulling="2026-02-18 19:37:29.839726558 +0000 UTC m=+213.421681403" observedRunningTime="2026-02-18 19:37:30.29229964 +0000 UTC m=+213.874254515" watchObservedRunningTime="2026-02-18 19:37:30.293234713 +0000 UTC m=+213.875189558" Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.314776 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-78d5s" podStartSLOduration=3.724147179 podStartE2EDuration="1m1.314750184s" podCreationTimestamp="2026-02-18 19:36:29 +0000 UTC" firstStartedPulling="2026-02-18 19:36:32.135326804 +0000 UTC m=+155.717281649" lastFinishedPulling="2026-02-18 19:37:29.725929789 +0000 UTC m=+213.307884654" observedRunningTime="2026-02-18 19:37:30.314713174 +0000 UTC m=+213.896668049" watchObservedRunningTime="2026-02-18 19:37:30.314750184 +0000 UTC m=+213.896705049" Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.357010 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qvwc8" podStartSLOduration=2.558596839 podStartE2EDuration="1m4.356986508s" podCreationTimestamp="2026-02-18 19:36:26 +0000 UTC" firstStartedPulling="2026-02-18 19:36:27.99972549 +0000 UTC m=+151.581680345" lastFinishedPulling="2026-02-18 19:37:29.798115149 +0000 UTC m=+213.380070014" observedRunningTime="2026-02-18 19:37:30.34304168 +0000 UTC m=+213.924996535" watchObservedRunningTime="2026-02-18 19:37:30.356986508 +0000 UTC m=+213.938941353" Feb 18 19:37:30 crc kubenswrapper[4932]: I0218 19:37:30.368639 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-4w2tj" podStartSLOduration=3.741856795 podStartE2EDuration="1m2.36861387s" podCreationTimestamp="2026-02-18 19:36:28 +0000 UTC" firstStartedPulling="2026-02-18 19:36:31.070276813 +0000 UTC m=+154.652231658" lastFinishedPulling="2026-02-18 19:37:29.697033848 +0000 UTC m=+213.278988733" observedRunningTime="2026-02-18 19:37:30.364139142 +0000 UTC m=+213.946094007" watchObservedRunningTime="2026-02-18 19:37:30.36861387 +0000 UTC m=+213.950568735" Feb 18 19:37:36 crc kubenswrapper[4932]: I0218 19:37:36.657406 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qvwc8" Feb 18 19:37:36 crc kubenswrapper[4932]: I0218 19:37:36.658107 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qvwc8" Feb 18 19:37:36 crc kubenswrapper[4932]: I0218 19:37:36.730946 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qvwc8" Feb 18 19:37:37 crc kubenswrapper[4932]: I0218 19:37:37.076411 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:37:37 crc kubenswrapper[4932]: I0218 19:37:37.076555 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:37:37 crc kubenswrapper[4932]: I0218 19:37:37.129815 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:37:37 crc kubenswrapper[4932]: I0218 19:37:37.401776 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qvwc8" Feb 18 19:37:37 crc kubenswrapper[4932]: I0218 19:37:37.412709 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:37:37 crc kubenswrapper[4932]: I0218 19:37:37.971228 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gbkr8"] Feb 18 19:37:38 crc kubenswrapper[4932]: I0218 19:37:38.854417 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:37:38 crc kubenswrapper[4932]: I0218 19:37:38.854459 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:37:38 crc kubenswrapper[4932]: I0218 19:37:38.911151 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.165349 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" podUID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" containerName="oauth-openshift" containerID="cri-o://6382a3d82fd4779d69e56bae634baaed056f7a56ccaabda7fcfd83e4fe75fc34" gracePeriod=15 Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.294954 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.294998 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.335476 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.360139 4932 generic.go:334] "Generic (PLEG): container finished" podID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" containerID="6382a3d82fd4779d69e56bae634baaed056f7a56ccaabda7fcfd83e4fe75fc34" exitCode=0 
Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.360678 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" event={"ID":"215a0eae-8c5b-4b0e-86f6-056bc6f696ff","Type":"ContainerDied","Data":"6382a3d82fd4779d69e56bae634baaed056f7a56ccaabda7fcfd83e4fe75fc34"} Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.360807 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gbkr8" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="registry-server" containerID="cri-o://3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea" gracePeriod=2 Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.395953 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.398713 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vwwjl" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.647037 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.658796 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-dir\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.658876 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-router-certs\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.658915 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-provider-selection\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.658966 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-idp-0-file-data\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659004 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-trusted-ca-bundle\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: 
\"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659034 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-service-ca\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659065 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-login\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659129 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-policies\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659166 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-cliconfig\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659225 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659235 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-session\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659331 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-ocp-branding-template\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659359 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-error\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659392 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmwdb\" (UniqueName: \"kubernetes.io/projected/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-kube-api-access-kmwdb\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.659437 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-serving-cert\") pod \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\" (UID: \"215a0eae-8c5b-4b0e-86f6-056bc6f696ff\") " Feb 18 19:37:39 crc 
kubenswrapper[4932]: I0218 19:37:39.659870 4932 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.665934 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.666051 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-kube-api-access-kmwdb" (OuterVolumeSpecName: "kube-api-access-kmwdb") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "kube-api-access-kmwdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.666211 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.666622 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.666698 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.666827 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.667509 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.669062 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.679759 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688232 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-59cd769dfc-kdxhn"] Feb 18 19:37:39 crc kubenswrapper[4932]: E0218 19:37:39.688467 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="extract-utilities" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688480 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="extract-utilities" Feb 18 19:37:39 crc kubenswrapper[4932]: E0218 19:37:39.688492 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" containerName="oauth-openshift" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688501 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" 
containerName="oauth-openshift" Feb 18 19:37:39 crc kubenswrapper[4932]: E0218 19:37:39.688512 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="extract-content" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688520 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="extract-content" Feb 18 19:37:39 crc kubenswrapper[4932]: E0218 19:37:39.688537 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="registry-server" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688546 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="registry-server" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688674 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" containerName="oauth-openshift" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.688691 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ac7afd-2261-4af0-9b59-f18c98424c21" containerName="registry-server" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.689123 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.691536 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.692415 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.700391 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.700789 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "215a0eae-8c5b-4b0e-86f6-056bc6f696ff" (UID: "215a0eae-8c5b-4b0e-86f6-056bc6f696ff"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.703995 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-59cd769dfc-kdxhn"] Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.744244 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-648d7854bd-2rffd"] Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.744477 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" podUID="3f701e3b-5068-423b-ae72-2097ca900619" containerName="controller-manager" containerID="cri-o://07c51006436dce5c79f6a0ca9587b0474d8d3d2fbf6ac368abfe60f9fc273e20" gracePeriod=30 Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760810 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a467e296-550a-46dd-b346-358df4c6ad1d-audit-dir\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760865 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-session\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760891 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760916 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760947 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760974 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-login\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.760998 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761018 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksscj\" (UniqueName: \"kubernetes.io/projected/a467e296-550a-46dd-b346-358df4c6ad1d-kube-api-access-ksscj\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761046 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-router-certs\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761068 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761090 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " 
pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761110 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-audit-policies\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761147 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-service-ca\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761198 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-error\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761252 4932 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761267 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 
crc kubenswrapper[4932]: I0218 19:37:39.761279 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761291 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761303 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761314 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmwdb\" (UniqueName: \"kubernetes.io/projected/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-kube-api-access-kmwdb\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761327 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761339 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761351 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761362 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761373 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761386 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.761398 4932 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/215a0eae-8c5b-4b0e-86f6-056bc6f696ff-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.822008 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"] Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.822224 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" podUID="d1eaf5e6-7318-4473-8317-8a38fcca1fdc" containerName="route-controller-manager" containerID="cri-o://1bde042b0eca7d25e70e8bc8f868a4af5d16d9cfdbec831cd7e71b1619585a03" gracePeriod=30 Feb 18 19:37:39 crc 
kubenswrapper[4932]: I0218 19:37:39.843529 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862036 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-catalog-content\") pod \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862114 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-utilities\") pod \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862258 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hghw\" (UniqueName: \"kubernetes.io/projected/29a4229b-f53b-4cd7-b81b-7fc2dfded045-kube-api-access-6hghw\") pod \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\" (UID: \"29a4229b-f53b-4cd7-b81b-7fc2dfded045\") " Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862437 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-error\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862493 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a467e296-550a-46dd-b346-358df4c6ad1d-audit-dir\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: 
\"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862523 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-session\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862574 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862595 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862618 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-login\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862636 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862652 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksscj\" (UniqueName: \"kubernetes.io/projected/a467e296-550a-46dd-b346-358df4c6ad1d-kube-api-access-ksscj\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862677 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-router-certs\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862701 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862722 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862745 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-audit-policies\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862785 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-service-ca\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.862943 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-utilities" (OuterVolumeSpecName: "utilities") pod "29a4229b-f53b-4cd7-b81b-7fc2dfded045" (UID: "29a4229b-f53b-4cd7-b81b-7fc2dfded045"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.863509 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-service-ca\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.863551 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.863664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.864264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a467e296-550a-46dd-b346-358df4c6ad1d-audit-dir\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.864411 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a467e296-550a-46dd-b346-358df4c6ad1d-audit-policies\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.865342 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a4229b-f53b-4cd7-b81b-7fc2dfded045-kube-api-access-6hghw" (OuterVolumeSpecName: "kube-api-access-6hghw") pod "29a4229b-f53b-4cd7-b81b-7fc2dfded045" (UID: "29a4229b-f53b-4cd7-b81b-7fc2dfded045"). InnerVolumeSpecName "kube-api-access-6hghw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.866309 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-error\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.866434 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-login\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.866729 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " 
pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.867333 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.867420 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.868013 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-session\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.871572 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-router-certs\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.874801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a467e296-550a-46dd-b346-358df4c6ad1d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.881203 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksscj\" (UniqueName: \"kubernetes.io/projected/a467e296-550a-46dd-b346-358df4c6ad1d-kube-api-access-ksscj\") pod \"oauth-openshift-59cd769dfc-kdxhn\" (UID: \"a467e296-550a-46dd-b346-358df4c6ad1d\") " pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.921751 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29a4229b-f53b-4cd7-b81b-7fc2dfded045" (UID: "29a4229b-f53b-4cd7-b81b-7fc2dfded045"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.964106 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.964139 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a4229b-f53b-4cd7-b81b-7fc2dfded045-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:39 crc kubenswrapper[4932]: I0218 19:37:39.964149 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hghw\" (UniqueName: \"kubernetes.io/projected/29a4229b-f53b-4cd7-b81b-7fc2dfded045-kube-api-access-6hghw\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.027109 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.266644 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.266694 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.308008 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.367711 4932 generic.go:334] "Generic (PLEG): container finished" podID="d1eaf5e6-7318-4473-8317-8a38fcca1fdc" containerID="1bde042b0eca7d25e70e8bc8f868a4af5d16d9cfdbec831cd7e71b1619585a03" exitCode=0 Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.367836 4932 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" event={"ID":"d1eaf5e6-7318-4473-8317-8a38fcca1fdc","Type":"ContainerDied","Data":"1bde042b0eca7d25e70e8bc8f868a4af5d16d9cfdbec831cd7e71b1619585a03"} Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.369089 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" event={"ID":"215a0eae-8c5b-4b0e-86f6-056bc6f696ff","Type":"ContainerDied","Data":"62838236ab987cac95945631bbd754af35252c7b859d7a4d83e36fd02b26a5f7"} Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.369130 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xnxl9" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.369151 4932 scope.go:117] "RemoveContainer" containerID="6382a3d82fd4779d69e56bae634baaed056f7a56ccaabda7fcfd83e4fe75fc34" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.372732 4932 generic.go:334] "Generic (PLEG): container finished" podID="3f701e3b-5068-423b-ae72-2097ca900619" containerID="07c51006436dce5c79f6a0ca9587b0474d8d3d2fbf6ac368abfe60f9fc273e20" exitCode=0 Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.372786 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" event={"ID":"3f701e3b-5068-423b-ae72-2097ca900619","Type":"ContainerDied","Data":"07c51006436dce5c79f6a0ca9587b0474d8d3d2fbf6ac368abfe60f9fc273e20"} Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.376729 4932 generic.go:334] "Generic (PLEG): container finished" podID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerID="3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea" exitCode=0 Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.376770 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gbkr8" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.376871 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerDied","Data":"3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea"} Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.376903 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gbkr8" event={"ID":"29a4229b-f53b-4cd7-b81b-7fc2dfded045","Type":"ContainerDied","Data":"a2c366de25c0453f7a2db8d06c018b6056eb68e4c159566103c144c6b3b72029"} Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.388205 4932 scope.go:117] "RemoveContainer" containerID="3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.410626 4932 scope.go:117] "RemoveContainer" containerID="cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.414971 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xnxl9"] Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.421334 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xnxl9"] Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.424746 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-78d5s" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.442290 4932 scope.go:117] "RemoveContainer" containerID="a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.446905 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gbkr8"] Feb 18 19:37:40 
crc kubenswrapper[4932]: I0218 19:37:40.449756 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gbkr8"] Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.456213 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-59cd769dfc-kdxhn"] Feb 18 19:37:40 crc kubenswrapper[4932]: W0218 19:37:40.482591 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda467e296_550a_46dd_b346_358df4c6ad1d.slice/crio-29791eced4e361d01f780e5edec4a59afe38a2a75a01024ccf3337cde6ebf796 WatchSource:0}: Error finding container 29791eced4e361d01f780e5edec4a59afe38a2a75a01024ccf3337cde6ebf796: Status 404 returned error can't find the container with id 29791eced4e361d01f780e5edec4a59afe38a2a75a01024ccf3337cde6ebf796 Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.489361 4932 scope.go:117] "RemoveContainer" containerID="3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea" Feb 18 19:37:40 crc kubenswrapper[4932]: E0218 19:37:40.490113 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea\": container with ID starting with 3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea not found: ID does not exist" containerID="3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.490147 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea"} err="failed to get container status \"3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea\": rpc error: code = NotFound desc = could not find container \"3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea\": container 
with ID starting with 3e57b3d6e154a8ebba2b8fca4b741c757cd8cb801009ea948632e42fbb363aea not found: ID does not exist" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.490188 4932 scope.go:117] "RemoveContainer" containerID="cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508" Feb 18 19:37:40 crc kubenswrapper[4932]: E0218 19:37:40.490499 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508\": container with ID starting with cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508 not found: ID does not exist" containerID="cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.490518 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508"} err="failed to get container status \"cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508\": rpc error: code = NotFound desc = could not find container \"cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508\": container with ID starting with cef6e7760ebec1fa06b76322117353e0f739e9e99d661b752ca75afb8975a508 not found: ID does not exist" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.490533 4932 scope.go:117] "RemoveContainer" containerID="a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3" Feb 18 19:37:40 crc kubenswrapper[4932]: E0218 19:37:40.491015 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3\": container with ID starting with a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3 not found: ID does not exist" containerID="a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3" 
Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.491032 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3"} err="failed to get container status \"a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3\": rpc error: code = NotFound desc = could not find container \"a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3\": container with ID starting with a179399980f04575a54a9f03e1d915317fc692a4024caf7d2c1e735e4fe4a0f3 not found: ID does not exist" Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.800756 4932 patch_prober.go:28] interesting pod/controller-manager-648d7854bd-2rffd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Feb 18 19:37:40 crc kubenswrapper[4932]: I0218 19:37:40.800813 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" podUID="3f701e3b-5068-423b-ae72-2097ca900619" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.017150 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037239 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"] Feb 18 19:37:41 crc kubenswrapper[4932]: E0218 19:37:41.037426 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="extract-content" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037437 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="extract-content" Feb 18 19:37:41 crc kubenswrapper[4932]: E0218 19:37:41.037453 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="extract-utilities" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037460 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="extract-utilities" Feb 18 19:37:41 crc kubenswrapper[4932]: E0218 19:37:41.037469 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1eaf5e6-7318-4473-8317-8a38fcca1fdc" containerName="route-controller-manager" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037475 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1eaf5e6-7318-4473-8317-8a38fcca1fdc" containerName="route-controller-manager" Feb 18 19:37:41 crc kubenswrapper[4932]: E0218 19:37:41.037485 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="registry-server" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037492 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="registry-server" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037578 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d1eaf5e6-7318-4473-8317-8a38fcca1fdc" containerName="route-controller-manager" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037587 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" containerName="registry-server" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.037947 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.078696 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"] Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.086777 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-serving-cert\") pod \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087214 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-client-ca\") pod \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087254 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88l4b\" (UniqueName: \"kubernetes.io/projected/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-kube-api-access-88l4b\") pod \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087303 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-config\") pod \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\" (UID: \"d1eaf5e6-7318-4473-8317-8a38fcca1fdc\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087498 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42b9k\" (UniqueName: \"kubernetes.io/projected/cb823dd3-7026-4c20-8dec-73f24b23d9f5-kube-api-access-42b9k\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087553 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb823dd3-7026-4c20-8dec-73f24b23d9f5-serving-cert\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087604 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-client-ca\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.087651 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-config\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc 
kubenswrapper[4932]: I0218 19:37:41.089861 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-config" (OuterVolumeSpecName: "config") pod "d1eaf5e6-7318-4473-8317-8a38fcca1fdc" (UID: "d1eaf5e6-7318-4473-8317-8a38fcca1fdc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.090129 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-client-ca" (OuterVolumeSpecName: "client-ca") pod "d1eaf5e6-7318-4473-8317-8a38fcca1fdc" (UID: "d1eaf5e6-7318-4473-8317-8a38fcca1fdc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.092692 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d1eaf5e6-7318-4473-8317-8a38fcca1fdc" (UID: "d1eaf5e6-7318-4473-8317-8a38fcca1fdc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.092790 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-kube-api-access-88l4b" (OuterVolumeSpecName: "kube-api-access-88l4b") pod "d1eaf5e6-7318-4473-8317-8a38fcca1fdc" (UID: "d1eaf5e6-7318-4473-8317-8a38fcca1fdc"). InnerVolumeSpecName "kube-api-access-88l4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.122917 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.188404 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f701e3b-5068-423b-ae72-2097ca900619-serving-cert\") pod \"3f701e3b-5068-423b-ae72-2097ca900619\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.188491 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-config\") pod \"3f701e3b-5068-423b-ae72-2097ca900619\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.188554 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n7r6\" (UniqueName: \"kubernetes.io/projected/3f701e3b-5068-423b-ae72-2097ca900619-kube-api-access-5n7r6\") pod \"3f701e3b-5068-423b-ae72-2097ca900619\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189338 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-client-ca\") pod \"3f701e3b-5068-423b-ae72-2097ca900619\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189426 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-proxy-ca-bundles\") pod \"3f701e3b-5068-423b-ae72-2097ca900619\" (UID: \"3f701e3b-5068-423b-ae72-2097ca900619\") " Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189534 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-config" (OuterVolumeSpecName: "config") pod "3f701e3b-5068-423b-ae72-2097ca900619" (UID: "3f701e3b-5068-423b-ae72-2097ca900619"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189727 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42b9k\" (UniqueName: \"kubernetes.io/projected/cb823dd3-7026-4c20-8dec-73f24b23d9f5-kube-api-access-42b9k\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189812 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb823dd3-7026-4c20-8dec-73f24b23d9f5-serving-cert\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189909 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-client-ca\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.189987 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-config\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " 
pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190039 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190054 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190063 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88l4b\" (UniqueName: \"kubernetes.io/projected/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-kube-api-access-88l4b\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190073 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190081 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eaf5e6-7318-4473-8317-8a38fcca1fdc-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190159 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-client-ca" (OuterVolumeSpecName: "client-ca") pod "3f701e3b-5068-423b-ae72-2097ca900619" (UID: "3f701e3b-5068-423b-ae72-2097ca900619"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.190730 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3f701e3b-5068-423b-ae72-2097ca900619" (UID: "3f701e3b-5068-423b-ae72-2097ca900619"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.191756 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-config\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.191873 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215a0eae-8c5b-4b0e-86f6-056bc6f696ff" path="/var/lib/kubelet/pods/215a0eae-8c5b-4b0e-86f6-056bc6f696ff/volumes" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.192497 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a4229b-f53b-4cd7-b81b-7fc2dfded045" path="/var/lib/kubelet/pods/29a4229b-f53b-4cd7-b81b-7fc2dfded045/volumes" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.193484 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f701e3b-5068-423b-ae72-2097ca900619-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3f701e3b-5068-423b-ae72-2097ca900619" (UID: "3f701e3b-5068-423b-ae72-2097ca900619"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.194245 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb823dd3-7026-4c20-8dec-73f24b23d9f5-serving-cert\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.194605 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-client-ca\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.199362 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f701e3b-5068-423b-ae72-2097ca900619-kube-api-access-5n7r6" (OuterVolumeSpecName: "kube-api-access-5n7r6") pod "3f701e3b-5068-423b-ae72-2097ca900619" (UID: "3f701e3b-5068-423b-ae72-2097ca900619"). InnerVolumeSpecName "kube-api-access-5n7r6". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.204075 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42b9k\" (UniqueName: \"kubernetes.io/projected/cb823dd3-7026-4c20-8dec-73f24b23d9f5-kube-api-access-42b9k\") pod \"route-controller-manager-877bb88d5-s6wj6\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.291036 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n7r6\" (UniqueName: \"kubernetes.io/projected/3f701e3b-5068-423b-ae72-2097ca900619-kube-api-access-5n7r6\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.291066 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.291077 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f701e3b-5068-423b-ae72-2097ca900619-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.291085 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f701e3b-5068-423b-ae72-2097ca900619-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.348936 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.392591 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" event={"ID":"a467e296-550a-46dd-b346-358df4c6ad1d","Type":"ContainerStarted","Data":"29791eced4e361d01f780e5edec4a59afe38a2a75a01024ccf3337cde6ebf796"}
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.394205 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.394242 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648d7854bd-2rffd" event={"ID":"3f701e3b-5068-423b-ae72-2097ca900619","Type":"ContainerDied","Data":"ffc89283d0d1b05c0e852a5ff74828280f4f0bd46e1714810111665fecf8f740"}
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.394313 4932 scope.go:117] "RemoveContainer" containerID="07c51006436dce5c79f6a0ca9587b0474d8d3d2fbf6ac368abfe60f9fc273e20"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.407133 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw" event={"ID":"d1eaf5e6-7318-4473-8317-8a38fcca1fdc","Type":"ContainerDied","Data":"ae3a5e90285132f6077bd152728f0c93ddc5e392da325f1bb2715b4f11c105b6"}
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.407215 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.428262 4932 scope.go:117] "RemoveContainer" containerID="1bde042b0eca7d25e70e8bc8f868a4af5d16d9cfdbec831cd7e71b1619585a03"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.444628 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-648d7854bd-2rffd"]
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.455831 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-648d7854bd-2rffd"]
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.468588 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"]
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.469134 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5589b4dbdd-mgnvw"]
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.550088 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"]
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.658105 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"]
Feb 18 19:37:41 crc kubenswrapper[4932]: E0218 19:37:41.658318 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f701e3b-5068-423b-ae72-2097ca900619" containerName="controller-manager"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.658330 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f701e3b-5068-423b-ae72-2097ca900619" containerName="controller-manager"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.658426 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f701e3b-5068-423b-ae72-2097ca900619" containerName="controller-manager"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.658758 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.663868 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.664652 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.664666 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.664732 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.664763 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.665612 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.679308 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"]
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.680639 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.694732 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-client-ca\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.694795 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-proxy-ca-bundles\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.694866 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmc2p\" (UniqueName: \"kubernetes.io/projected/a10acd9d-2f5c-41c0-b221-65865fe30829-kube-api-access-vmc2p\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.694898 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-config\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.694925 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10acd9d-2f5c-41c0-b221-65865fe30829-serving-cert\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.795867 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10acd9d-2f5c-41c0-b221-65865fe30829-serving-cert\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.795938 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-client-ca\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.795976 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-proxy-ca-bundles\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.796046 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmc2p\" (UniqueName: \"kubernetes.io/projected/a10acd9d-2f5c-41c0-b221-65865fe30829-kube-api-access-vmc2p\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.796079 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-config\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.798000 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-config\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.798129 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-client-ca\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.799407 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-proxy-ca-bundles\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.804047 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10acd9d-2f5c-41c0-b221-65865fe30829-serving-cert\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.825424 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmc2p\" (UniqueName: \"kubernetes.io/projected/a10acd9d-2f5c-41c0-b221-65865fe30829-kube-api-access-vmc2p\") pod \"controller-manager-6fb4cb5544-zwdsg\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:41 crc kubenswrapper[4932]: I0218 19:37:41.996205 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.368477 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwwjl"]
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.368685 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vwwjl" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="registry-server" containerID="cri-o://2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5" gracePeriod=2
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.409435 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"]
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.441872 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" event={"ID":"a467e296-550a-46dd-b346-358df4c6ad1d","Type":"ContainerStarted","Data":"ea3e484f416ba68906069e5b3fb84a68ee488f726d61911dc19a9be43ad02a1a"}
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.442432 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn"
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.445638 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" event={"ID":"cb823dd3-7026-4c20-8dec-73f24b23d9f5","Type":"ContainerStarted","Data":"a892711fb47ba6b4bcbbb8ec95473d5a4d1c5058339cd6e0916c9dd0e3c0a2ca"}
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.445672 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" event={"ID":"cb823dd3-7026-4c20-8dec-73f24b23d9f5","Type":"ContainerStarted","Data":"e2a8883038eeab43da38d5bcf9fb3ee3f03931e9147fd7652ed3b803d8e18880"}
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.446390 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.448886 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn"
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.467876 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-59cd769dfc-kdxhn" podStartSLOduration=28.467857165 podStartE2EDuration="28.467857165s" podCreationTimestamp="2026-02-18 19:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:42.466767028 +0000 UTC m=+226.048721873" watchObservedRunningTime="2026-02-18 19:37:42.467857165 +0000 UTC m=+226.049812010"
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.487779 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" podStartSLOduration=3.487764447 podStartE2EDuration="3.487764447s" podCreationTimestamp="2026-02-18 19:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:42.484935919 +0000 UTC m=+226.066890764" watchObservedRunningTime="2026-02-18 19:37:42.487764447 +0000 UTC m=+226.069719292"
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.567343 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.879916 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.911706 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-utilities\") pod \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") "
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.911749 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-522zn\" (UniqueName: \"kubernetes.io/projected/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-kube-api-access-522zn\") pod \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") "
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.911868 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-catalog-content\") pod \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\" (UID: \"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a\") "
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.912628 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-utilities" (OuterVolumeSpecName: "utilities") pod "83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" (UID: "83fa5ba7-c2d8-4d68-839f-ba2f4cad568a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.921366 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-kube-api-access-522zn" (OuterVolumeSpecName: "kube-api-access-522zn") pod "83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" (UID: "83fa5ba7-c2d8-4d68-839f-ba2f4cad568a"). InnerVolumeSpecName "kube-api-access-522zn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:37:42 crc kubenswrapper[4932]: I0218 19:37:42.939427 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" (UID: "83fa5ba7-c2d8-4d68-839f-ba2f4cad568a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.012740 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.012776 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.012787 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-522zn\" (UniqueName: \"kubernetes.io/projected/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a-kube-api-access-522zn\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.184965 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f701e3b-5068-423b-ae72-2097ca900619" path="/var/lib/kubelet/pods/3f701e3b-5068-423b-ae72-2097ca900619/volumes"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.185609 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1eaf5e6-7318-4473-8317-8a38fcca1fdc" path="/var/lib/kubelet/pods/d1eaf5e6-7318-4473-8317-8a38fcca1fdc/volumes"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.453056 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" event={"ID":"a10acd9d-2f5c-41c0-b221-65865fe30829","Type":"ContainerStarted","Data":"ab88a41d874ce61f48b43b162e1cf7bb6c2c2fa42ca34ca8edd7d29c53a71c40"}
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.453104 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" event={"ID":"a10acd9d-2f5c-41c0-b221-65865fe30829","Type":"ContainerStarted","Data":"9884fc5b935e7ec29f1fa3ab7fe35eb2cbfe8ccdcca7c00b3c99f77fb62e0b75"}
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.453520 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.455817 4932 generic.go:334] "Generic (PLEG): container finished" podID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerID="2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5" exitCode=0
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.455882 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerDied","Data":"2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5"}
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.455926 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwwjl"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.455959 4932 scope.go:117] "RemoveContainer" containerID="2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.455940 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwwjl" event={"ID":"83fa5ba7-c2d8-4d68-839f-ba2f4cad568a","Type":"ContainerDied","Data":"aa8524bb79cb00bc572889b14100dbb8df53c65222c30b9f755ec3035f0dbea0"}
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.463410 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.483967 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" podStartSLOduration=4.483943569 podStartE2EDuration="4.483943569s" podCreationTimestamp="2026-02-18 19:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:43.477575935 +0000 UTC m=+227.059530820" watchObservedRunningTime="2026-02-18 19:37:43.483943569 +0000 UTC m=+227.065898424"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.484779 4932 scope.go:117] "RemoveContainer" containerID="e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.516014 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwwjl"]
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.516132 4932 scope.go:117] "RemoveContainer" containerID="edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.521441 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwwjl"]
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.544226 4932 scope.go:117] "RemoveContainer" containerID="2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5"
Feb 18 19:37:43 crc kubenswrapper[4932]: E0218 19:37:43.544719 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5\": container with ID starting with 2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5 not found: ID does not exist" containerID="2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.544774 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5"} err="failed to get container status \"2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5\": rpc error: code = NotFound desc = could not find container \"2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5\": container with ID starting with 2143cff5659106fb74f76bfc9911c46057ca8c6fec0701ad0655a4944fb37ce5 not found: ID does not exist"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.544816 4932 scope.go:117] "RemoveContainer" containerID="e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0"
Feb 18 19:37:43 crc kubenswrapper[4932]: E0218 19:37:43.545097 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0\": container with ID starting with e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0 not found: ID does not exist" containerID="e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.545142 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0"} err="failed to get container status \"e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0\": rpc error: code = NotFound desc = could not find container \"e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0\": container with ID starting with e56a526cefe2b351e537d7d8e70d925cdff315c5032d2cac6eeddcf04f2903b0 not found: ID does not exist"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.545167 4932 scope.go:117] "RemoveContainer" containerID="edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb"
Feb 18 19:37:43 crc kubenswrapper[4932]: E0218 19:37:43.545603 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb\": container with ID starting with edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb not found: ID does not exist" containerID="edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb"
Feb 18 19:37:43 crc kubenswrapper[4932]: I0218 19:37:43.545656 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb"} err="failed to get container status \"edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb\": rpc error: code = NotFound desc = could not find container \"edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb\": container with ID starting with edc99053853d8154c316239909de23e9353814ed8ddb7cbf8894ddd6152c03bb not found: ID does not exist"
Feb 18 19:37:44 crc kubenswrapper[4932]: I0218 19:37:44.768876 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-78d5s"]
Feb 18 19:37:44 crc kubenswrapper[4932]: I0218 19:37:44.769091 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-78d5s" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="registry-server" containerID="cri-o://d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531" gracePeriod=2
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.188394 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" path="/var/lib/kubelet/pods/83fa5ba7-c2d8-4d68-839f-ba2f4cad568a/volumes"
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.201390 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.249091 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-utilities\") pod \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") "
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.249141 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-catalog-content\") pod \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") "
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.249226 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5ks5\" (UniqueName: \"kubernetes.io/projected/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-kube-api-access-h5ks5\") pod \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\" (UID: \"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75\") "
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.250144 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-utilities" (OuterVolumeSpecName: "utilities") pod "2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" (UID: "2483e7fb-5cc5-4715-8eea-fd5cf6b31d75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.259493 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-kube-api-access-h5ks5" (OuterVolumeSpecName: "kube-api-access-h5ks5") pod "2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" (UID: "2483e7fb-5cc5-4715-8eea-fd5cf6b31d75"). InnerVolumeSpecName "kube-api-access-h5ks5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.350468 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.350517 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5ks5\" (UniqueName: \"kubernetes.io/projected/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-kube-api-access-h5ks5\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.375801 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" (UID: "2483e7fb-5cc5-4715-8eea-fd5cf6b31d75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.452322 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.488411 4932 generic.go:334] "Generic (PLEG): container finished" podID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerID="d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531" exitCode=0
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.488479 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78d5s"
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.488595 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78d5s" event={"ID":"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75","Type":"ContainerDied","Data":"d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531"}
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.488634 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78d5s" event={"ID":"2483e7fb-5cc5-4715-8eea-fd5cf6b31d75","Type":"ContainerDied","Data":"d4b4e12432d81a20c3a5774755df782409b7a4c04cd3667ffe8f283572befe4d"}
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.488725 4932 scope.go:117] "RemoveContainer" containerID="d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531"
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.524690 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-78d5s"]
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.527689 4932 scope.go:117] "RemoveContainer" containerID="0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af"
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.528746 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-78d5s"]
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.543076 4932 scope.go:117] "RemoveContainer" containerID="4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648"
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.567225 4932 scope.go:117] "RemoveContainer" containerID="d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531"
Feb 18 19:37:45 crc kubenswrapper[4932]: E0218 19:37:45.567829 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531\": container with ID starting with d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531 not found: ID does not exist" containerID="d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531"
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.568577 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531"} err="failed to get container status \"d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531\": rpc error: code = NotFound desc = could not find container \"d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531\": container with ID starting with d594bc17aa06cd1a38357a66172faa1dc4d10fb9703d8833ccd2c334e112a531 not found: ID does not exist"
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.568619 4932 scope.go:117] "RemoveContainer" containerID="0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af"
Feb 18 19:37:45 crc kubenswrapper[4932]: E0218 19:37:45.568974 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af\": container with ID starting with 0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af not found: ID does not exist" containerID="0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af"
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.569006 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af"} err="failed to get container status \"0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af\": rpc error: code = NotFound desc = could not find container \"0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af\": container with ID starting with 0567c7be0494b9bddf70e6043593b0a988556b8c69896e99c16ae794d1c2a2af not found: ID does not exist"
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.569087 4932 scope.go:117] "RemoveContainer" containerID="4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648"
Feb 18 19:37:45 crc kubenswrapper[4932]: E0218 19:37:45.570467 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648\": container with ID starting with 4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648 not found: ID does not exist" containerID="4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648"
Feb 18 19:37:45 crc kubenswrapper[4932]: I0218 19:37:45.570498 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648"} err="failed to get container status \"4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648\": rpc error: code = NotFound desc = could not find container \"4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648\": container with ID starting with 4c7dbb9b6882a64794c308943b5dbc3158b29679534b8c6911e13ecd82366648 not found:
ID does not exist" Feb 18 19:37:47 crc kubenswrapper[4932]: I0218 19:37:47.187049 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" path="/var/lib/kubelet/pods/2483e7fb-5cc5-4715-8eea-fd5cf6b31d75/volumes" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.255082 4932 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.255968 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="extract-utilities" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.255994 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="extract-utilities" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.256012 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="extract-content" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256025 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="extract-content" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.256056 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="extract-utilities" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256068 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="extract-utilities" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.256088 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="registry-server" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256101 4932 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="registry-server" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.256121 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="extract-content" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256134 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="extract-content" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.256156 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="registry-server" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256200 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="registry-server" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256387 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2483e7fb-5cc5-4715-8eea-fd5cf6b31d75" containerName="registry-server" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.256428 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="83fa5ba7-c2d8-4d68-839f-ba2f4cad568a" containerName="registry-server" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257037 4932 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257287 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257682 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7" gracePeriod=15 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257750 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0" gracePeriod=15 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257797 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18" gracePeriod=15 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257715 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5" gracePeriod=15 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.257891 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601" gracePeriod=15 Feb 18 19:37:52 crc 
kubenswrapper[4932]: I0218 19:37:52.259120 4932 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.259429 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.259457 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.259525 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.259546 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.260826 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.260844 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.260866 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.260880 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.260898 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.260913 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.260930 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.260942 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261167 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261232 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261251 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261270 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261285 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261308 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:37:52 crc kubenswrapper[4932]: E0218 19:37:52.261513 4932 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.261528 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.349986 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.350685 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.350729 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.350752 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 
19:37:52.350801 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.350838 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.350959 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.351299 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452426 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 
19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452508 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452543 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452554 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452605 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452636 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452649 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452668 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452711 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452687 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452789 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452834 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452864 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.452983 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.453030 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.453070 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.546198 4932 generic.go:334] "Generic (PLEG): container finished" podID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" 
containerID="89f0774e9a169a85e00453d4419c3e930e811396c9527b57c8e29093ef32ec9f" exitCode=0 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.546318 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b38b0e86-4a7b-4436-a0ef-565a61a1eab4","Type":"ContainerDied","Data":"89f0774e9a169a85e00453d4419c3e930e811396c9527b57c8e29093ef32ec9f"} Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.547420 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.547850 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.549775 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.551691 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.552737 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5" exitCode=0 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.552773 4932 generic.go:334] "Generic 
(PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601" exitCode=0 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.552788 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0" exitCode=0 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.552802 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18" exitCode=2 Feb 18 19:37:52 crc kubenswrapper[4932]: I0218 19:37:52.552854 4932 scope.go:117] "RemoveContainer" containerID="5438cc3b0b98ff48fd5a0bcddd198a1616b811e554226b45dd04134bfb7dc203" Feb 18 19:37:53 crc kubenswrapper[4932]: I0218 19:37:53.563983 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:37:53 crc kubenswrapper[4932]: I0218 19:37:53.969151 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:53 crc kubenswrapper[4932]: I0218 19:37:53.970446 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.080673 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kube-api-access\") pod \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.081149 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-var-lock\") pod \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.081402 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kubelet-dir\") pod \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\" (UID: \"b38b0e86-4a7b-4436-a0ef-565a61a1eab4\") " Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.081468 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-var-lock" (OuterVolumeSpecName: "var-lock") pod "b38b0e86-4a7b-4436-a0ef-565a61a1eab4" (UID: "b38b0e86-4a7b-4436-a0ef-565a61a1eab4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.081521 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b38b0e86-4a7b-4436-a0ef-565a61a1eab4" (UID: "b38b0e86-4a7b-4436-a0ef-565a61a1eab4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.089858 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b38b0e86-4a7b-4436-a0ef-565a61a1eab4" (UID: "b38b0e86-4a7b-4436-a0ef-565a61a1eab4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.183399 4932 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.183846 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.184032 4932 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b38b0e86-4a7b-4436-a0ef-565a61a1eab4-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.571810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"b38b0e86-4a7b-4436-a0ef-565a61a1eab4","Type":"ContainerDied","Data":"47e12ed4376656b94af9a3460a8df57cde49986c200ba6e60e8d0c9fbcd288a4"} Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.572068 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.572084 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47e12ed4376656b94af9a3460a8df57cde49986c200ba6e60e8d0c9fbcd288a4" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.653369 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.657116 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.658080 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.658541 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.659068 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.703220 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.703322 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.703357 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.703384 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.703414 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.703581 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.704010 4932 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.704043 4932 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:54 crc kubenswrapper[4932]: I0218 19:37:54.704062 4932 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.187380 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.583807 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.584994 4932 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7" exitCode=0 Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.585099 4932 scope.go:117] "RemoveContainer" containerID="982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.585140 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.586030 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.587796 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.590905 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.591378 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.613152 4932 scope.go:117] "RemoveContainer" containerID="376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.642013 4932 scope.go:117] "RemoveContainer" containerID="f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0" Feb 18 19:37:55 crc 
kubenswrapper[4932]: I0218 19:37:55.670115 4932 scope.go:117] "RemoveContainer" containerID="58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.692037 4932 scope.go:117] "RemoveContainer" containerID="4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.719410 4932 scope.go:117] "RemoveContainer" containerID="8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.748318 4932 scope.go:117] "RemoveContainer" containerID="982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5" Feb 18 19:37:55 crc kubenswrapper[4932]: E0218 19:37:55.748904 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\": container with ID starting with 982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5 not found: ID does not exist" containerID="982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.748981 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5"} err="failed to get container status \"982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\": rpc error: code = NotFound desc = could not find container \"982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5\": container with ID starting with 982ba813b3edcbed2610ef223d96b4b58c00ca7f82582e91f32d43d0ad4385b5 not found: ID does not exist" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.749058 4932 scope.go:117] "RemoveContainer" containerID="376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601" Feb 18 19:37:55 crc kubenswrapper[4932]: E0218 19:37:55.749555 
4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\": container with ID starting with 376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601 not found: ID does not exist" containerID="376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.749640 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601"} err="failed to get container status \"376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\": rpc error: code = NotFound desc = could not find container \"376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601\": container with ID starting with 376df572c63208a2550249a3261f6e100e445a1ffc84b11b7aba63240ecab601 not found: ID does not exist" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.749693 4932 scope.go:117] "RemoveContainer" containerID="f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0" Feb 18 19:37:55 crc kubenswrapper[4932]: E0218 19:37:55.750329 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\": container with ID starting with f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0 not found: ID does not exist" containerID="f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.750408 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0"} err="failed to get container status \"f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\": rpc error: code = 
NotFound desc = could not find container \"f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0\": container with ID starting with f2709873d7b996b2ca007a746bb85cce864b986c2495c34b33f1974f4620a1a0 not found: ID does not exist" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.750452 4932 scope.go:117] "RemoveContainer" containerID="58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18" Feb 18 19:37:55 crc kubenswrapper[4932]: E0218 19:37:55.750892 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\": container with ID starting with 58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18 not found: ID does not exist" containerID="58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.751063 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18"} err="failed to get container status \"58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\": rpc error: code = NotFound desc = could not find container \"58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18\": container with ID starting with 58ea1b06e257ec513e1178270b4dcba2686d8b09e602edf1fc95b6eceb8ccc18 not found: ID does not exist" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.751124 4932 scope.go:117] "RemoveContainer" containerID="4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7" Feb 18 19:37:55 crc kubenswrapper[4932]: E0218 19:37:55.752396 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\": container with ID starting with 
4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7 not found: ID does not exist" containerID="4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.752448 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7"} err="failed to get container status \"4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\": rpc error: code = NotFound desc = could not find container \"4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7\": container with ID starting with 4798b14020656067b83c3703df0ef4b8f2dbc0728f6cef91f5894817b87672e7 not found: ID does not exist" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.752478 4932 scope.go:117] "RemoveContainer" containerID="8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c" Feb 18 19:37:55 crc kubenswrapper[4932]: E0218 19:37:55.752798 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\": container with ID starting with 8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c not found: ID does not exist" containerID="8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c" Feb 18 19:37:55 crc kubenswrapper[4932]: I0218 19:37:55.752838 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c"} err="failed to get container status \"8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\": rpc error: code = NotFound desc = could not find container \"8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c\": container with ID starting with 8bf81e0554b2e450bce5fa10ebd518e99faf13e7f34e3ebd7183d25281c58e5c not found: ID does not 
exist" Feb 18 19:37:57 crc kubenswrapper[4932]: I0218 19:37:57.184356 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: I0218 19:37:57.185277 4932 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.303940 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.304798 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.305366 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.305647 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: 
E0218 19:37:57.306123 4932 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:57 crc kubenswrapper[4932]: I0218 19:37:57.306218 4932 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.306818 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="200ms" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.306951 4932 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:57 crc kubenswrapper[4932]: I0218 19:37:57.307714 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:57 crc kubenswrapper[4932]: W0218 19:37:57.358783 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-0b3e7e780e57c42bcfa82fdecd5974328d8cc9b994116dc41c1fe40e18053785 WatchSource:0}: Error finding container 0b3e7e780e57c42bcfa82fdecd5974328d8cc9b994116dc41c1fe40e18053785: Status 404 returned error can't find the container with id 0b3e7e780e57c42bcfa82fdecd5974328d8cc9b994116dc41c1fe40e18053785 Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.362030 4932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956e7507d0b720 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:37:57.36150608 +0000 UTC m=+240.943460955,LastTimestamp:2026-02-18 19:37:57.36150608 +0000 UTC m=+240.943460955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.508251 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="400ms" Feb 18 19:37:57 crc kubenswrapper[4932]: I0218 19:37:57.604313 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0b3e7e780e57c42bcfa82fdecd5974328d8cc9b994116dc41c1fe40e18053785"} Feb 18 19:37:57 crc kubenswrapper[4932]: E0218 19:37:57.909630 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="800ms" Feb 18 19:37:58 crc kubenswrapper[4932]: I0218 19:37:58.615433 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1"} Feb 18 19:37:58 crc kubenswrapper[4932]: I0218 19:37:58.616414 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:37:58 crc kubenswrapper[4932]: E0218 19:37:58.616441 4932 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:37:58 crc kubenswrapper[4932]: E0218 19:37:58.711266 4932 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="1.6s" Feb 18 19:37:59 crc kubenswrapper[4932]: E0218 19:37:59.621574 4932 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:38:00 crc kubenswrapper[4932]: E0218 19:38:00.312825 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="3.2s" Feb 18 19:38:02 crc kubenswrapper[4932]: E0218 19:38:02.710335 4932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956e7507d0b720 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:37:57.36150608 +0000 UTC m=+240.943460955,LastTimestamp:2026-02-18 19:37:57.36150608 +0000 UTC 
m=+240.943460955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:38:03 crc kubenswrapper[4932]: E0218 19:38:03.514088 4932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="6.4s" Feb 18 19:38:05 crc kubenswrapper[4932]: I0218 19:38:05.663834 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 19:38:05 crc kubenswrapper[4932]: I0218 19:38:05.663959 4932 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04" exitCode=1 Feb 18 19:38:05 crc kubenswrapper[4932]: I0218 19:38:05.664035 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04"} Feb 18 19:38:05 crc kubenswrapper[4932]: I0218 19:38:05.665135 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:05 crc kubenswrapper[4932]: I0218 19:38:05.665225 4932 scope.go:117] "RemoveContainer" containerID="fa8305e53efd08773c4d996cfca46b48edcc8922133b1c4e95ad45bb993efe04" Feb 18 19:38:05 crc kubenswrapper[4932]: I0218 19:38:05.665466 4932 status_manager.go:851] "Failed to get 
status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:06 crc kubenswrapper[4932]: I0218 19:38:06.672220 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 19:38:06 crc kubenswrapper[4932]: I0218 19:38:06.672537 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c663158c78d673cd290435fe02306d2e388eabe920f2c0971d83cb4233a2dacc"} Feb 18 19:38:06 crc kubenswrapper[4932]: I0218 19:38:06.673508 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:06 crc kubenswrapper[4932]: I0218 19:38:06.674161 4932 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.178354 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.182975 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.184146 4932 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.184892 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.185589 4932 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.198671 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.198885 4932 mirror_client.go:130] "Deleting a mirror 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:07 crc kubenswrapper[4932]: E0218 19:38:07.199695 4932 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.200426 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:07 crc kubenswrapper[4932]: W0218 19:38:07.215710 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-9952aa2dd3a7067664fc6a731e493a8fc388bf1f077cb33e1512468a4acbc63d WatchSource:0}: Error finding container 9952aa2dd3a7067664fc6a731e493a8fc388bf1f077cb33e1512468a4acbc63d: Status 404 returned error can't find the container with id 9952aa2dd3a7067664fc6a731e493a8fc388bf1f077cb33e1512468a4acbc63d Feb 18 19:38:07 crc kubenswrapper[4932]: E0218 19:38:07.264357 4932 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" volumeName="registry-storage" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.309998 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.681751 4932 generic.go:334] 
"Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="605e4403cffb6c05afad1cfa84e897f679145191f00dfca26201582912b754c1" exitCode=0 Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.681884 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"605e4403cffb6c05afad1cfa84e897f679145191f00dfca26201582912b754c1"} Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.682296 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9952aa2dd3a7067664fc6a731e493a8fc388bf1f077cb33e1512468a4acbc63d"} Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.682898 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.682921 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:07 crc kubenswrapper[4932]: E0218 19:38:07.683418 4932 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 19:38:07.683806 4932 status_manager.go:851] "Failed to get status for pod" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:07 crc kubenswrapper[4932]: I0218 
19:38:07.684570 4932 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Feb 18 19:38:08 crc kubenswrapper[4932]: I0218 19:38:08.688367 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a0ff61fcc5eea9d2b70ff6fa451420cd3c979ccb6b28474c592e07fe4b130d88"} Feb 18 19:38:08 crc kubenswrapper[4932]: I0218 19:38:08.688697 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f81ee2b20871ddeb6bd83602f9a8de8c9b70930668c50d3d1c77c00863cb4981"} Feb 18 19:38:09 crc kubenswrapper[4932]: I0218 19:38:09.703007 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d9450c3ad888f774bc49789dbc4275f929db36ed240c5858f77bb4305626022d"} Feb 18 19:38:09 crc kubenswrapper[4932]: I0218 19:38:09.703432 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"58049a2acaa78d458cd3a81eae7124d4f804f1b0475cc60e47542dc023ffa61a"} Feb 18 19:38:09 crc kubenswrapper[4932]: I0218 19:38:09.703441 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"73f5d1cc935e097385509033bc5b8ec515214e4b7af0c8bf77c780ad703090dd"} Feb 18 19:38:09 crc kubenswrapper[4932]: 
I0218 19:38:09.703454 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:09 crc kubenswrapper[4932]: I0218 19:38:09.703548 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:09 crc kubenswrapper[4932]: I0218 19:38:09.703579 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:10 crc kubenswrapper[4932]: I0218 19:38:10.655361 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:10 crc kubenswrapper[4932]: I0218 19:38:10.660086 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:12 crc kubenswrapper[4932]: I0218 19:38:12.201386 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:12 crc kubenswrapper[4932]: I0218 19:38:12.201770 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:12 crc kubenswrapper[4932]: I0218 19:38:12.210005 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:14 crc kubenswrapper[4932]: I0218 19:38:14.717486 4932 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:15 crc kubenswrapper[4932]: I0218 19:38:15.744491 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:15 crc kubenswrapper[4932]: I0218 19:38:15.745012 4932 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:15 crc kubenswrapper[4932]: I0218 19:38:15.749234 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:16 crc kubenswrapper[4932]: I0218 19:38:16.749811 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:16 crc kubenswrapper[4932]: I0218 19:38:16.749848 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:17 crc kubenswrapper[4932]: I0218 19:38:17.202709 4932 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c172ac02-824c-482f-b659-1338ee76566a" Feb 18 19:38:17 crc kubenswrapper[4932]: I0218 19:38:17.315554 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:38:23 crc kubenswrapper[4932]: I0218 19:38:23.855763 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 19:38:23 crc kubenswrapper[4932]: I0218 19:38:23.939759 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 19:38:24 crc kubenswrapper[4932]: I0218 19:38:24.314961 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 19:38:24 crc kubenswrapper[4932]: I0218 19:38:24.955830 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 19:38:25 crc kubenswrapper[4932]: 
I0218 19:38:25.192161 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.214201 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.215419 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.270979 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.554362 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.909494 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.962431 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 19:38:25 crc kubenswrapper[4932]: I0218 19:38:25.992358 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.133079 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.153408 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.282251 4932 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.444698 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.461228 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.503653 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.527154 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.530767 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.693026 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 19:38:26 crc kubenswrapper[4932]: I0218 19:38:26.939681 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.084743 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.098698 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.116884 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.233086 
4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.312539 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.385146 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.466589 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.638271 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.814671 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.851909 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.911612 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 19:38:27 crc kubenswrapper[4932]: I0218 19:38:27.934072 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.115835 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.166790 4932 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.176629 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.486065 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.494069 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.511853 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.626756 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.654041 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.677685 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.735652 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.798346 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.809563 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 19:38:28 crc kubenswrapper[4932]: I0218 19:38:28.999283 4932 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.043451 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.064149 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.117519 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.169065 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.213534 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.316360 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.388228 4932 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.414453 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.451271 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.508799 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 
18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.516377 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.520112 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.561955 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.610999 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.647527 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.659491 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.680285 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.703922 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.818838 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 19:38:29 crc kubenswrapper[4932]: I0218 19:38:29.994474 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.193138 4932 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.236085 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.295212 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.314144 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.393104 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.590705 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.602285 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.633467 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.667216 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.707133 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.712618 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.741437 4932 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.749648 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.758808 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.819709 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 19:38:30 crc kubenswrapper[4932]: I0218 19:38:30.923170 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.022976 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.153625 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.154495 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.218563 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.283614 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.377278 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.411033 4932 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.438537 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.451744 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.526766 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.529077 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.551155 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.556967 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.575608 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.809136 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.877936 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 19:38:31 crc kubenswrapper[4932]: I0218 19:38:31.904735 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.130125 
4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.153142 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.210274 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.276737 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.304679 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.391412 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.412073 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.461409 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.489802 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.532984 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.560427 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" 
Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.562714 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.584587 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.594794 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.632075 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.638464 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.661890 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.760444 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.780246 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.891750 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 19:38:32 crc kubenswrapper[4932]: I0218 19:38:32.935937 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.057931 4932 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.095026 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.118266 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.137640 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.151368 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.190921 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.214207 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.250061 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.349207 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.537249 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.593772 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.596709 4932 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.659042 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.675271 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.692531 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.693302 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.814251 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.815755 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.915629 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 19:38:33 crc kubenswrapper[4932]: I0218 19:38:33.980535 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.006758 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.047595 4932 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.056429 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.097533 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.139344 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.194288 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.199405 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.204122 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.219313 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.345381 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.413567 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.417491 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.425532 4932 reflector.go:368] 
Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.430378 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.430461 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.430767 4932 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.430794 4932 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="34f6a85c-e66d-4dd7-a145-95674593cba0" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.434481 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.449943 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.44991689 podStartE2EDuration="20.44991689s" podCreationTimestamp="2026-02-18 19:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:34.445838151 +0000 UTC m=+278.027792996" watchObservedRunningTime="2026-02-18 19:38:34.44991689 +0000 UTC m=+278.031871755" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.456734 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.459557 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 19:38:34 crc 
kubenswrapper[4932]: I0218 19:38:34.519187 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.531429 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.548495 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.623845 4932 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.634715 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.754348 4932 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.822146 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.901598 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 19:38:34 crc kubenswrapper[4932]: I0218 19:38:34.933212 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.092891 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.121877 4932 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.175256 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.191502 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.197725 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.200156 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.211835 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.215377 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.300553 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.306313 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.416645 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.433284 4932 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 19:38:35 
crc kubenswrapper[4932]: I0218 19:38:35.580202 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.784219 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.854022 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.886960 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.888001 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.926079 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 19:38:35 crc kubenswrapper[4932]: I0218 19:38:35.932908 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.002952 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.006374 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.034564 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.047854 4932 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.103076 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.231670 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.335367 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.340101 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.353014 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.413699 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.432829 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.445388 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.469762 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.512215 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.557513 
4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.608099 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.742616 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.805371 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.845311 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.852018 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.900702 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.918563 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 19:38:36 crc kubenswrapper[4932]: I0218 19:38:36.933244 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.040638 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.157966 4932 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:38:37 
crc kubenswrapper[4932]: I0218 19:38:37.158339 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1" gracePeriod=5 Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.189339 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.192840 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.243615 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.290691 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.398576 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.400767 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.402422 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.595722 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.809626 4932 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.886944 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 19:38:37 crc kubenswrapper[4932]: I0218 19:38:37.893366 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.097860 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.116030 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.238231 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.365527 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.380714 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.578545 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.585150 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.595569 4932 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.736109 4932 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.768816 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.837416 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 19:38:38 crc kubenswrapper[4932]: I0218 19:38:38.987259 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.052363 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.161242 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.184354 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.388480 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.465628 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.484677 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.525618 4932 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.551850 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.567429 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.642596 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.755115 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"] Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.755927 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" podUID="cb823dd3-7026-4c20-8dec-73f24b23d9f5" containerName="route-controller-manager" containerID="cri-o://a892711fb47ba6b4bcbbb8ec95473d5a4d1c5058339cd6e0916c9dd0e3c0a2ca" gracePeriod=30 Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.760489 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"] Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.760760 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" podUID="a10acd9d-2f5c-41c0-b221-65865fe30829" containerName="controller-manager" containerID="cri-o://ab88a41d874ce61f48b43b162e1cf7bb6c2c2fa42ca34ca8edd7d29c53a71c40" gracePeriod=30 Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.914707 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="a10acd9d-2f5c-41c0-b221-65865fe30829" containerID="ab88a41d874ce61f48b43b162e1cf7bb6c2c2fa42ca34ca8edd7d29c53a71c40" exitCode=0 Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.914761 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" event={"ID":"a10acd9d-2f5c-41c0-b221-65865fe30829","Type":"ContainerDied","Data":"ab88a41d874ce61f48b43b162e1cf7bb6c2c2fa42ca34ca8edd7d29c53a71c40"} Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.916082 4932 generic.go:334] "Generic (PLEG): container finished" podID="cb823dd3-7026-4c20-8dec-73f24b23d9f5" containerID="a892711fb47ba6b4bcbbb8ec95473d5a4d1c5058339cd6e0916c9dd0e3c0a2ca" exitCode=0 Feb 18 19:38:39 crc kubenswrapper[4932]: I0218 19:38:39.916193 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" event={"ID":"cb823dd3-7026-4c20-8dec-73f24b23d9f5","Type":"ContainerDied","Data":"a892711fb47ba6b4bcbbb8ec95473d5a4d1c5058339cd6e0916c9dd0e3c0a2ca"} Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.080828 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.184029 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.235021 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.240957 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.248217 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-config\") pod \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.248498 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-client-ca\") pod \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.248657 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42b9k\" (UniqueName: \"kubernetes.io/projected/cb823dd3-7026-4c20-8dec-73f24b23d9f5-kube-api-access-42b9k\") pod \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.248768 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb823dd3-7026-4c20-8dec-73f24b23d9f5-serving-cert\") pod \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\" (UID: \"cb823dd3-7026-4c20-8dec-73f24b23d9f5\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.249103 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "cb823dd3-7026-4c20-8dec-73f24b23d9f5" (UID: "cb823dd3-7026-4c20-8dec-73f24b23d9f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.249146 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-config" (OuterVolumeSpecName: "config") pod "cb823dd3-7026-4c20-8dec-73f24b23d9f5" (UID: "cb823dd3-7026-4c20-8dec-73f24b23d9f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.253856 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb823dd3-7026-4c20-8dec-73f24b23d9f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cb823dd3-7026-4c20-8dec-73f24b23d9f5" (UID: "cb823dd3-7026-4c20-8dec-73f24b23d9f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.260341 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb823dd3-7026-4c20-8dec-73f24b23d9f5-kube-api-access-42b9k" (OuterVolumeSpecName: "kube-api-access-42b9k") pod "cb823dd3-7026-4c20-8dec-73f24b23d9f5" (UID: "cb823dd3-7026-4c20-8dec-73f24b23d9f5"). InnerVolumeSpecName "kube-api-access-42b9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.288139 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.293431 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.325412 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349326 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-config\") pod \"a10acd9d-2f5c-41c0-b221-65865fe30829\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349378 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-proxy-ca-bundles\") pod \"a10acd9d-2f5c-41c0-b221-65865fe30829\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349410 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10acd9d-2f5c-41c0-b221-65865fe30829-serving-cert\") pod \"a10acd9d-2f5c-41c0-b221-65865fe30829\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349446 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-client-ca\") pod \"a10acd9d-2f5c-41c0-b221-65865fe30829\" (UID: \"a10acd9d-2f5c-41c0-b221-65865fe30829\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349489 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmc2p\" (UniqueName: \"kubernetes.io/projected/a10acd9d-2f5c-41c0-b221-65865fe30829-kube-api-access-vmc2p\") pod \"a10acd9d-2f5c-41c0-b221-65865fe30829\" (UID: 
\"a10acd9d-2f5c-41c0-b221-65865fe30829\") " Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349652 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349663 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42b9k\" (UniqueName: \"kubernetes.io/projected/cb823dd3-7026-4c20-8dec-73f24b23d9f5-kube-api-access-42b9k\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349673 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb823dd3-7026-4c20-8dec-73f24b23d9f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.349681 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb823dd3-7026-4c20-8dec-73f24b23d9f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.350629 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a10acd9d-2f5c-41c0-b221-65865fe30829" (UID: "a10acd9d-2f5c-41c0-b221-65865fe30829"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.350663 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-client-ca" (OuterVolumeSpecName: "client-ca") pod "a10acd9d-2f5c-41c0-b221-65865fe30829" (UID: "a10acd9d-2f5c-41c0-b221-65865fe30829"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.350697 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-config" (OuterVolumeSpecName: "config") pod "a10acd9d-2f5c-41c0-b221-65865fe30829" (UID: "a10acd9d-2f5c-41c0-b221-65865fe30829"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.353870 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10acd9d-2f5c-41c0-b221-65865fe30829-kube-api-access-vmc2p" (OuterVolumeSpecName: "kube-api-access-vmc2p") pod "a10acd9d-2f5c-41c0-b221-65865fe30829" (UID: "a10acd9d-2f5c-41c0-b221-65865fe30829"). InnerVolumeSpecName "kube-api-access-vmc2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.354197 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10acd9d-2f5c-41c0-b221-65865fe30829-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a10acd9d-2f5c-41c0-b221-65865fe30829" (UID: "a10acd9d-2f5c-41c0-b221-65865fe30829"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.372531 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.405715 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.451631 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.451678 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.451698 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a10acd9d-2f5c-41c0-b221-65865fe30829-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.451716 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a10acd9d-2f5c-41c0-b221-65865fe30829-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.451736 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmc2p\" (UniqueName: \"kubernetes.io/projected/a10acd9d-2f5c-41c0-b221-65865fe30829-kube-api-access-vmc2p\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.925317 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" 
event={"ID":"cb823dd3-7026-4c20-8dec-73f24b23d9f5","Type":"ContainerDied","Data":"e2a8883038eeab43da38d5bcf9fb3ee3f03931e9147fd7652ed3b803d8e18880"} Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.927037 4932 scope.go:117] "RemoveContainer" containerID="a892711fb47ba6b4bcbbb8ec95473d5a4d1c5058339cd6e0916c9dd0e3c0a2ca" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.925821 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.928524 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" event={"ID":"a10acd9d-2f5c-41c0-b221-65865fe30829","Type":"ContainerDied","Data":"9884fc5b935e7ec29f1fa3ab7fe35eb2cbfe8ccdcca7c00b3c99f77fb62e0b75"} Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.928625 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg" Feb 18 19:38:40 crc kubenswrapper[4932]: I0218 19:38:40.988718 4932 scope.go:117] "RemoveContainer" containerID="ab88a41d874ce61f48b43b162e1cf7bb6c2c2fa42ca34ca8edd7d29c53a71c40" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.005998 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.015204 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6fb4cb5544-zwdsg"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.024644 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.031366 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-877bb88d5-s6wj6"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.197624 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10acd9d-2f5c-41c0-b221-65865fe30829" path="/var/lib/kubelet/pods/a10acd9d-2f5c-41c0-b221-65865fe30829/volumes" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.198827 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb823dd3-7026-4c20-8dec-73f24b23d9f5" path="/var/lib/kubelet/pods/cb823dd3-7026-4c20-8dec-73f24b23d9f5/volumes" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.357653 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.708582 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-4mdtv"] Feb 18 19:38:41 crc kubenswrapper[4932]: E0218 19:38:41.708954 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10acd9d-2f5c-41c0-b221-65865fe30829" containerName="controller-manager" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.708991 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10acd9d-2f5c-41c0-b221-65865fe30829" containerName="controller-manager" Feb 18 19:38:41 crc kubenswrapper[4932]: E0218 19:38:41.709030 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb823dd3-7026-4c20-8dec-73f24b23d9f5" containerName="route-controller-manager" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709050 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb823dd3-7026-4c20-8dec-73f24b23d9f5" containerName="route-controller-manager" Feb 18 19:38:41 crc kubenswrapper[4932]: E0218 19:38:41.709076 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" containerName="installer" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709092 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" containerName="installer" Feb 18 19:38:41 crc kubenswrapper[4932]: E0218 19:38:41.709119 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709136 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709397 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10acd9d-2f5c-41c0-b221-65865fe30829" containerName="controller-manager" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709427 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709455 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b38b0e86-4a7b-4436-a0ef-565a61a1eab4" containerName="installer" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.709472 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb823dd3-7026-4c20-8dec-73f24b23d9f5" containerName="route-controller-manager" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.710223 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.719923 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.720559 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.720809 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.721416 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.722269 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.722714 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.726397 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.727579 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.732637 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.734744 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.734867 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.735078 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.735366 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.736405 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.740270 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.756050 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.760021 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-4mdtv"] Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771006 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-proxy-ca-bundles\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771097 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c9c4a73-3821-4c75-a01c-d7f77444ff45-serving-cert\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771146 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-config\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771225 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-config\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771282 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2gkf\" (UniqueName: \"kubernetes.io/projected/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-kube-api-access-q2gkf\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " 
pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771361 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-serving-cert\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771402 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-client-ca\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771434 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-client-ca\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.771481 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxtnh\" (UniqueName: \"kubernetes.io/projected/3c9c4a73-3821-4c75-a01c-d7f77444ff45-kube-api-access-fxtnh\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873093 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-serving-cert\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873152 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-client-ca\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873197 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-client-ca\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873232 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxtnh\" (UniqueName: \"kubernetes.io/projected/3c9c4a73-3821-4c75-a01c-d7f77444ff45-kube-api-access-fxtnh\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873276 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-proxy-ca-bundles\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 
19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873304 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c9c4a73-3821-4c75-a01c-d7f77444ff45-serving-cert\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873327 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-config\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873347 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-config\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.873376 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2gkf\" (UniqueName: \"kubernetes.io/projected/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-kube-api-access-q2gkf\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.874852 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-proxy-ca-bundles\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: 
\"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.875609 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-client-ca\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.875616 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-config\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.875938 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-client-ca\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.876791 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-config\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.893920 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c9c4a73-3821-4c75-a01c-d7f77444ff45-serving-cert\") pod 
\"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.895906 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2gkf\" (UniqueName: \"kubernetes.io/projected/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-kube-api-access-q2gkf\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.897481 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-serving-cert\") pod \"route-controller-manager-878c4f777-f4d79\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:41 crc kubenswrapper[4932]: I0218 19:38:41.904927 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxtnh\" (UniqueName: \"kubernetes.io/projected/3c9c4a73-3821-4c75-a01c-d7f77444ff45-kube-api-access-fxtnh\") pod \"controller-manager-8b5db5768-4mdtv\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.036302 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.059322 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.338041 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-4mdtv"] Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.378470 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79"] Feb 18 19:38:42 crc kubenswrapper[4932]: W0218 19:38:42.385128 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dd6288c_f4b2_4b2d_aef1_d0c604f6b8b7.slice/crio-b9119ccfa07aee7ec222b7c0517d5bad3a2004de4cece9029bb0c05347adb1be WatchSource:0}: Error finding container b9119ccfa07aee7ec222b7c0517d5bad3a2004de4cece9029bb0c05347adb1be: Status 404 returned error can't find the container with id b9119ccfa07aee7ec222b7c0517d5bad3a2004de4cece9029bb0c05347adb1be Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.723217 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.723292 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.784897 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.784947 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785027 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785060 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785049 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785122 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785139 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785163 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785283 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785426 4932 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785443 4932 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785453 4932 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.785462 4932 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.792420 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.886943 4932 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.944806 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" event={"ID":"3c9c4a73-3821-4c75-a01c-d7f77444ff45","Type":"ContainerStarted","Data":"a34f9b69ea1d5920344caa95aa69b9994f98be7d6289cf2c6072102aa51e67e5"} Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.944847 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" event={"ID":"3c9c4a73-3821-4c75-a01c-d7f77444ff45","Type":"ContainerStarted","Data":"2a742d73160052609d346519668f631488172d4caaf7fdb275efa43cbb19e621"} Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.945400 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.948847 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" event={"ID":"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7","Type":"ContainerStarted","Data":"c4a1a52ed7d776b48eaf89eb50d29a223c8d5bdfa0f61a5b13ee5510278040e7"} Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.949023 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" event={"ID":"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7","Type":"ContainerStarted","Data":"b9119ccfa07aee7ec222b7c0517d5bad3a2004de4cece9029bb0c05347adb1be"} Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.949052 4932 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.950639 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.951214 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.951262 4932 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1" exitCode=137 Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.951303 4932 scope.go:117] "RemoveContainer" containerID="ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.951414 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.963873 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" podStartSLOduration=3.9638580020000003 podStartE2EDuration="3.963858002s" podCreationTimestamp="2026-02-18 19:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:42.96090242 +0000 UTC m=+286.542857265" watchObservedRunningTime="2026-02-18 19:38:42.963858002 +0000 UTC m=+286.545812847" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.978711 4932 scope.go:117] "RemoveContainer" containerID="ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1" Feb 18 19:38:42 crc kubenswrapper[4932]: E0218 19:38:42.979149 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1\": container with ID starting with ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1 not found: ID does not exist" containerID="ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1" Feb 18 19:38:42 crc kubenswrapper[4932]: I0218 19:38:42.979202 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1"} err="failed to get container status \"ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1\": rpc error: code = NotFound desc = could not find container \"ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1\": container with ID starting with ccdf9016d19b3aa18d81dc6c0ce8d9980bc77507152a8b0f2f881269efd783e1 not found: ID does not exist" Feb 18 19:38:43 crc kubenswrapper[4932]: I0218 19:38:43.017383 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" podStartSLOduration=4.017361689 podStartE2EDuration="4.017361689s" podCreationTimestamp="2026-02-18 19:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:43.013886235 +0000 UTC m=+286.595841100" watchObservedRunningTime="2026-02-18 19:38:43.017361689 +0000 UTC m=+286.599316534" Feb 18 19:38:43 crc kubenswrapper[4932]: I0218 19:38:43.091319 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:38:43 crc kubenswrapper[4932]: I0218 19:38:43.197039 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 18 19:38:56 crc kubenswrapper[4932]: I0218 19:38:56.942844 4932 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 18 19:38:57 crc kubenswrapper[4932]: I0218 19:38:57.037644 4932 generic.go:334] "Generic (PLEG): container finished" podID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerID="e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e" exitCode=0 Feb 18 19:38:57 crc kubenswrapper[4932]: I0218 19:38:57.037694 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" event={"ID":"e39708f9-5d2d-4ed5-9243-7b71ef470ca7","Type":"ContainerDied","Data":"e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e"} Feb 18 19:38:57 crc kubenswrapper[4932]: I0218 19:38:57.038145 4932 scope.go:117] "RemoveContainer" containerID="e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e" Feb 18 19:38:58 crc kubenswrapper[4932]: I0218 
19:38:58.057814 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" event={"ID":"e39708f9-5d2d-4ed5-9243-7b71ef470ca7","Type":"ContainerStarted","Data":"6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe"} Feb 18 19:38:58 crc kubenswrapper[4932]: I0218 19:38:58.059148 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:38:58 crc kubenswrapper[4932]: I0218 19:38:58.061546 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:38:59 crc kubenswrapper[4932]: I0218 19:38:59.052616 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 19:38:59 crc kubenswrapper[4932]: I0218 19:38:59.704777 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-4mdtv"] Feb 18 19:38:59 crc kubenswrapper[4932]: I0218 19:38:59.705027 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" podUID="3c9c4a73-3821-4c75-a01c-d7f77444ff45" containerName="controller-manager" containerID="cri-o://a34f9b69ea1d5920344caa95aa69b9994f98be7d6289cf2c6072102aa51e67e5" gracePeriod=30 Feb 18 19:38:59 crc kubenswrapper[4932]: I0218 19:38:59.714366 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79"] Feb 18 19:38:59 crc kubenswrapper[4932]: I0218 19:38:59.714601 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" podUID="1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" containerName="route-controller-manager" 
containerID="cri-o://c4a1a52ed7d776b48eaf89eb50d29a223c8d5bdfa0f61a5b13ee5510278040e7" gracePeriod=30 Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.072662 4932 generic.go:334] "Generic (PLEG): container finished" podID="3c9c4a73-3821-4c75-a01c-d7f77444ff45" containerID="a34f9b69ea1d5920344caa95aa69b9994f98be7d6289cf2c6072102aa51e67e5" exitCode=0 Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.072810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" event={"ID":"3c9c4a73-3821-4c75-a01c-d7f77444ff45","Type":"ContainerDied","Data":"a34f9b69ea1d5920344caa95aa69b9994f98be7d6289cf2c6072102aa51e67e5"} Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.076649 4932 generic.go:334] "Generic (PLEG): container finished" podID="1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" containerID="c4a1a52ed7d776b48eaf89eb50d29a223c8d5bdfa0f61a5b13ee5510278040e7" exitCode=0 Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.076696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" event={"ID":"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7","Type":"ContainerDied","Data":"c4a1a52ed7d776b48eaf89eb50d29a223c8d5bdfa0f61a5b13ee5510278040e7"} Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.299892 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.408780 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423100 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-proxy-ca-bundles\") pod \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423186 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-serving-cert\") pod \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423263 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-client-ca\") pod \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423305 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-config\") pod \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423350 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxtnh\" (UniqueName: \"kubernetes.io/projected/3c9c4a73-3821-4c75-a01c-d7f77444ff45-kube-api-access-fxtnh\") pod \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423376 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-config\") pod \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423397 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-client-ca\") pod \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423434 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2gkf\" (UniqueName: \"kubernetes.io/projected/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-kube-api-access-q2gkf\") pod \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\" (UID: \"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.423460 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c9c4a73-3821-4c75-a01c-d7f77444ff45-serving-cert\") pod \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\" (UID: \"3c9c4a73-3821-4c75-a01c-d7f77444ff45\") " Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.424682 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3c9c4a73-3821-4c75-a01c-d7f77444ff45" (UID: "3c9c4a73-3821-4c75-a01c-d7f77444ff45"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.425554 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-client-ca" (OuterVolumeSpecName: "client-ca") pod "3c9c4a73-3821-4c75-a01c-d7f77444ff45" (UID: "3c9c4a73-3821-4c75-a01c-d7f77444ff45"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.425571 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-config" (OuterVolumeSpecName: "config") pod "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" (UID: "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.425748 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-config" (OuterVolumeSpecName: "config") pod "3c9c4a73-3821-4c75-a01c-d7f77444ff45" (UID: "3c9c4a73-3821-4c75-a01c-d7f77444ff45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.425797 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-client-ca" (OuterVolumeSpecName: "client-ca") pod "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" (UID: "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.428927 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" (UID: "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.429243 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c9c4a73-3821-4c75-a01c-d7f77444ff45-kube-api-access-fxtnh" (OuterVolumeSpecName: "kube-api-access-fxtnh") pod "3c9c4a73-3821-4c75-a01c-d7f77444ff45" (UID: "3c9c4a73-3821-4c75-a01c-d7f77444ff45"). InnerVolumeSpecName "kube-api-access-fxtnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.429347 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-kube-api-access-q2gkf" (OuterVolumeSpecName: "kube-api-access-q2gkf") pod "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" (UID: "1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7"). InnerVolumeSpecName "kube-api-access-q2gkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.429848 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c9c4a73-3821-4c75-a01c-d7f77444ff45-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3c9c4a73-3821-4c75-a01c-d7f77444ff45" (UID: "3c9c4a73-3821-4c75-a01c-d7f77444ff45"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.524871 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2gkf\" (UniqueName: \"kubernetes.io/projected/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-kube-api-access-q2gkf\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.524932 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c9c4a73-3821-4c75-a01c-d7f77444ff45-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.524951 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.524967 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.524983 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.524998 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.525015 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxtnh\" (UniqueName: \"kubernetes.io/projected/3c9c4a73-3821-4c75-a01c-d7f77444ff45-kube-api-access-fxtnh\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.525029 4932 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:00 crc kubenswrapper[4932]: I0218 19:39:00.525042 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c9c4a73-3821-4c75-a01c-d7f77444ff45-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.086846 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" event={"ID":"3c9c4a73-3821-4c75-a01c-d7f77444ff45","Type":"ContainerDied","Data":"2a742d73160052609d346519668f631488172d4caaf7fdb275efa43cbb19e621"} Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.086864 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b5db5768-4mdtv" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.087306 4932 scope.go:117] "RemoveContainer" containerID="a34f9b69ea1d5920344caa95aa69b9994f98be7d6289cf2c6072102aa51e67e5" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.089287 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" event={"ID":"1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7","Type":"ContainerDied","Data":"b9119ccfa07aee7ec222b7c0517d5bad3a2004de4cece9029bb0c05347adb1be"} Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.089600 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.108298 4932 scope.go:117] "RemoveContainer" containerID="c4a1a52ed7d776b48eaf89eb50d29a223c8d5bdfa0f61a5b13ee5510278040e7" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.138382 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-4mdtv"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.143274 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-4mdtv"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.152207 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.161479 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-f4d79"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.185069 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" path="/var/lib/kubelet/pods/1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7/volumes" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.185686 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c9c4a73-3821-4c75-a01c-d7f77444ff45" path="/var/lib/kubelet/pods/3c9c4a73-3821-4c75-a01c-d7f77444ff45/volumes" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.715862 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-dp67c"] Feb 18 19:39:01 crc kubenswrapper[4932]: E0218 19:39:01.716427 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c9c4a73-3821-4c75-a01c-d7f77444ff45" containerName="controller-manager" Feb 18 19:39:01 crc 
kubenswrapper[4932]: I0218 19:39:01.716475 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c9c4a73-3821-4c75-a01c-d7f77444ff45" containerName="controller-manager" Feb 18 19:39:01 crc kubenswrapper[4932]: E0218 19:39:01.716537 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" containerName="route-controller-manager" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.716559 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" containerName="route-controller-manager" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.716793 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd6288c-f4b2-4b2d-aef1-d0c604f6b8b7" containerName="route-controller-manager" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.716845 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c9c4a73-3821-4c75-a01c-d7f77444ff45" containerName="controller-manager" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.717738 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.718460 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.719090 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.723612 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.723982 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.725123 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.725394 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.725621 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.725853 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.726041 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.727406 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.727464 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.727488 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 
19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.729519 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.730293 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.741995 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-dp67c"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.742274 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.759134 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"] Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.843867 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-config\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.843944 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-serving-cert\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.844098 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-client-ca\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.844455 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5pz5\" (UniqueName: \"kubernetes.io/projected/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-kube-api-access-c5pz5\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.844659 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16265115-064b-4308-8c41-b58e058ed40d-serving-cert\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.844826 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plxfn\" (UniqueName: \"kubernetes.io/projected/16265115-064b-4308-8c41-b58e058ed40d-kube-api-access-plxfn\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.844990 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-client-ca\") pod \"controller-manager-67db6f585-dp67c\" (UID: 
\"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.845146 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-proxy-ca-bundles\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.845358 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-config\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946069 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16265115-064b-4308-8c41-b58e058ed40d-serving-cert\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946128 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plxfn\" (UniqueName: \"kubernetes.io/projected/16265115-064b-4308-8c41-b58e058ed40d-kube-api-access-plxfn\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946157 4932 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-client-ca\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946196 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-proxy-ca-bundles\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946220 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-config\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946252 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-config\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946282 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-serving-cert\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 
19:39:01.946306 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-client-ca\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.946335 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5pz5\" (UniqueName: \"kubernetes.io/projected/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-kube-api-access-c5pz5\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.947432 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-client-ca\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.947768 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-client-ca\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.948453 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-config\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " 
pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.949482 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-proxy-ca-bundles\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.949845 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-config\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.950871 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16265115-064b-4308-8c41-b58e058ed40d-serving-cert\") pod \"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.951656 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-serving-cert\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.970693 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plxfn\" (UniqueName: \"kubernetes.io/projected/16265115-064b-4308-8c41-b58e058ed40d-kube-api-access-plxfn\") pod 
\"route-controller-manager-57cffcc444-2wxkm\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") " pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"
Feb 18 19:39:01 crc kubenswrapper[4932]: I0218 19:39:01.972928 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5pz5\" (UniqueName: \"kubernetes.io/projected/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-kube-api-access-c5pz5\") pod \"controller-manager-67db6f585-dp67c\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " pod="openshift-controller-manager/controller-manager-67db6f585-dp67c"
Feb 18 19:39:02 crc kubenswrapper[4932]: I0218 19:39:02.059825 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c"
Feb 18 19:39:02 crc kubenswrapper[4932]: I0218 19:39:02.073285 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"
Feb 18 19:39:02 crc kubenswrapper[4932]: I0218 19:39:02.572022 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-dp67c"]
Feb 18 19:39:02 crc kubenswrapper[4932]: W0218 19:39:02.586958 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93e7ae54_b8ce_4890_9901_514d2f4b7f0a.slice/crio-69e9e84f33df4568345d5c91cf10b5627bbd157605a893896cb062f70472ec3a WatchSource:0}: Error finding container 69e9e84f33df4568345d5c91cf10b5627bbd157605a893896cb062f70472ec3a: Status 404 returned error can't find the container with id 69e9e84f33df4568345d5c91cf10b5627bbd157605a893896cb062f70472ec3a
Feb 18 19:39:02 crc kubenswrapper[4932]: I0218 19:39:02.618116 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"]
Feb 18 19:39:02 crc kubenswrapper[4932]: W0218 19:39:02.624642 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16265115_064b_4308_8c41_b58e058ed40d.slice/crio-b8a23dcbd1d8cf02657c91f0e61f4f45339881b0ffae9e25a18d47cdddc65610 WatchSource:0}: Error finding container b8a23dcbd1d8cf02657c91f0e61f4f45339881b0ffae9e25a18d47cdddc65610: Status 404 returned error can't find the container with id b8a23dcbd1d8cf02657c91f0e61f4f45339881b0ffae9e25a18d47cdddc65610
Feb 18 19:39:02 crc kubenswrapper[4932]: I0218 19:39:02.902527 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.105426 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" event={"ID":"93e7ae54-b8ce-4890-9901-514d2f4b7f0a","Type":"ContainerStarted","Data":"250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b"}
Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.105745 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c"
Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.105756 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" event={"ID":"93e7ae54-b8ce-4890-9901-514d2f4b7f0a","Type":"ContainerStarted","Data":"69e9e84f33df4568345d5c91cf10b5627bbd157605a893896cb062f70472ec3a"}
Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.106645 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" event={"ID":"16265115-064b-4308-8c41-b58e058ed40d","Type":"ContainerStarted","Data":"3f7b74fac2a032d3567d6e305c53304299a33545185746dd3b1ef2ca283a7fbf"}
Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.106678 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" event={"ID":"16265115-064b-4308-8c41-b58e058ed40d","Type":"ContainerStarted","Data":"b8a23dcbd1d8cf02657c91f0e61f4f45339881b0ffae9e25a18d47cdddc65610"}
Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.107126 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"
Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.112289 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c"
Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.112596 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"
Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.124870 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" podStartSLOduration=4.124851028 podStartE2EDuration="4.124851028s" podCreationTimestamp="2026-02-18 19:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:03.121044105 +0000 UTC m=+306.702998940" watchObservedRunningTime="2026-02-18 19:39:03.124851028 +0000 UTC m=+306.706805883"
Feb 18 19:39:03 crc kubenswrapper[4932]: I0218 19:39:03.137209 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" podStartSLOduration=4.137189057 podStartE2EDuration="4.137189057s" podCreationTimestamp="2026-02-18 19:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:03.133900527 +0000 UTC m=+306.715855372" watchObservedRunningTime="2026-02-18 19:39:03.137189057 +0000 UTC m=+306.719143892"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.383847 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5kmmh"]
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.384952 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.400031 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5kmmh"]
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.498998 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a1a50912-ee96-4a51-8ad1-49a83e229618-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499367 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1a50912-ee96-4a51-8ad1-49a83e229618-trusted-ca\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499402 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a1a50912-ee96-4a51-8ad1-49a83e229618-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499420 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a1a50912-ee96-4a51-8ad1-49a83e229618-registry-certificates\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499446 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499603 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-registry-tls\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499652 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqdkx\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-kube-api-access-mqdkx\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.499679 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-bound-sa-token\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.517875 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600507 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-registry-tls\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600541 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqdkx\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-kube-api-access-mqdkx\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600559 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-bound-sa-token\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600584 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a1a50912-ee96-4a51-8ad1-49a83e229618-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600624 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1a50912-ee96-4a51-8ad1-49a83e229618-trusted-ca\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a1a50912-ee96-4a51-8ad1-49a83e229618-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.600682 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a1a50912-ee96-4a51-8ad1-49a83e229618-registry-certificates\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.601740 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a1a50912-ee96-4a51-8ad1-49a83e229618-registry-certificates\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.603547 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a1a50912-ee96-4a51-8ad1-49a83e229618-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.604775 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1a50912-ee96-4a51-8ad1-49a83e229618-trusted-ca\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.615705 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-registry-tls\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.619581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a1a50912-ee96-4a51-8ad1-49a83e229618-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.620518 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqdkx\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-kube-api-access-mqdkx\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.624727 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a50912-ee96-4a51-8ad1-49a83e229618-bound-sa-token\") pod \"image-registry-66df7c8f76-5kmmh\" (UID: \"a1a50912-ee96-4a51-8ad1-49a83e229618\") " pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:05 crc kubenswrapper[4932]: I0218 19:39:05.707075 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:06 crc kubenswrapper[4932]: I0218 19:39:06.096480 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5kmmh"]
Feb 18 19:39:06 crc kubenswrapper[4932]: W0218 19:39:06.099916 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a50912_ee96_4a51_8ad1_49a83e229618.slice/crio-8f70847d27880691b67883ba9738b19c7187ede4de1e897ffb8f7923a9c8a4c2 WatchSource:0}: Error finding container 8f70847d27880691b67883ba9738b19c7187ede4de1e897ffb8f7923a9c8a4c2: Status 404 returned error can't find the container with id 8f70847d27880691b67883ba9738b19c7187ede4de1e897ffb8f7923a9c8a4c2
Feb 18 19:39:06 crc kubenswrapper[4932]: I0218 19:39:06.126751 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" event={"ID":"a1a50912-ee96-4a51-8ad1-49a83e229618","Type":"ContainerStarted","Data":"8f70847d27880691b67883ba9738b19c7187ede4de1e897ffb8f7923a9c8a4c2"}
Feb 18 19:39:07 crc kubenswrapper[4932]: I0218 19:39:07.134291 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" event={"ID":"a1a50912-ee96-4a51-8ad1-49a83e229618","Type":"ContainerStarted","Data":"5f6ea72debe2de4aaf5d2b14a806ea124708968c517e218ddb73f15a1487b163"}
Feb 18 19:39:07 crc kubenswrapper[4932]: I0218 19:39:07.134678 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:07 crc kubenswrapper[4932]: I0218 19:39:07.153165 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh" podStartSLOduration=2.153149621 podStartE2EDuration="2.153149621s" podCreationTimestamp="2026-02-18 19:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:07.150962618 +0000 UTC m=+310.732917513" watchObservedRunningTime="2026-02-18 19:39:07.153149621 +0000 UTC m=+310.735104466"
Feb 18 19:39:19 crc kubenswrapper[4932]: I0218 19:39:19.743108 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"]
Feb 18 19:39:19 crc kubenswrapper[4932]: I0218 19:39:19.744115 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" podUID="16265115-064b-4308-8c41-b58e058ed40d" containerName="route-controller-manager" containerID="cri-o://3f7b74fac2a032d3567d6e305c53304299a33545185746dd3b1ef2ca283a7fbf" gracePeriod=30
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.213727 4932 generic.go:334] "Generic (PLEG): container finished" podID="16265115-064b-4308-8c41-b58e058ed40d" containerID="3f7b74fac2a032d3567d6e305c53304299a33545185746dd3b1ef2ca283a7fbf" exitCode=0
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.213864 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" event={"ID":"16265115-064b-4308-8c41-b58e058ed40d","Type":"ContainerDied","Data":"3f7b74fac2a032d3567d6e305c53304299a33545185746dd3b1ef2ca283a7fbf"}
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.428516 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.511651 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-config\") pod \"16265115-064b-4308-8c41-b58e058ed40d\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") "
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.511795 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plxfn\" (UniqueName: \"kubernetes.io/projected/16265115-064b-4308-8c41-b58e058ed40d-kube-api-access-plxfn\") pod \"16265115-064b-4308-8c41-b58e058ed40d\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") "
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.511836 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16265115-064b-4308-8c41-b58e058ed40d-serving-cert\") pod \"16265115-064b-4308-8c41-b58e058ed40d\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") "
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.511881 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-client-ca\") pod \"16265115-064b-4308-8c41-b58e058ed40d\" (UID: \"16265115-064b-4308-8c41-b58e058ed40d\") "
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.512805 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-client-ca" (OuterVolumeSpecName: "client-ca") pod "16265115-064b-4308-8c41-b58e058ed40d" (UID: "16265115-064b-4308-8c41-b58e058ed40d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.513969 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-config" (OuterVolumeSpecName: "config") pod "16265115-064b-4308-8c41-b58e058ed40d" (UID: "16265115-064b-4308-8c41-b58e058ed40d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.518569 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16265115-064b-4308-8c41-b58e058ed40d-kube-api-access-plxfn" (OuterVolumeSpecName: "kube-api-access-plxfn") pod "16265115-064b-4308-8c41-b58e058ed40d" (UID: "16265115-064b-4308-8c41-b58e058ed40d"). InnerVolumeSpecName "kube-api-access-plxfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.518979 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16265115-064b-4308-8c41-b58e058ed40d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16265115-064b-4308-8c41-b58e058ed40d" (UID: "16265115-064b-4308-8c41-b58e058ed40d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.613929 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plxfn\" (UniqueName: \"kubernetes.io/projected/16265115-064b-4308-8c41-b58e058ed40d-kube-api-access-plxfn\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.614202 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16265115-064b-4308-8c41-b58e058ed40d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.614213 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:20 crc kubenswrapper[4932]: I0218 19:39:20.614222 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16265115-064b-4308-8c41-b58e058ed40d-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.219788 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm" event={"ID":"16265115-064b-4308-8c41-b58e058ed40d","Type":"ContainerDied","Data":"b8a23dcbd1d8cf02657c91f0e61f4f45339881b0ffae9e25a18d47cdddc65610"}
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.219853 4932 scope.go:117] "RemoveContainer" containerID="3f7b74fac2a032d3567d6e305c53304299a33545185746dd3b1ef2ca283a7fbf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.219864 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.242732 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"]
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.248658 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cffcc444-2wxkm"]
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.736088 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"]
Feb 18 19:39:21 crc kubenswrapper[4932]: E0218 19:39:21.736570 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16265115-064b-4308-8c41-b58e058ed40d" containerName="route-controller-manager"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.736612 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="16265115-064b-4308-8c41-b58e058ed40d" containerName="route-controller-manager"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.736868 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="16265115-064b-4308-8c41-b58e058ed40d" containerName="route-controller-manager"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.737718 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.739944 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.742961 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.743392 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.744419 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.744793 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.745520 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.747877 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"]
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.832910 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd7wb\" (UniqueName: \"kubernetes.io/projected/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-kube-api-access-fd7wb\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.833037 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-serving-cert\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.833084 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-config\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.833127 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-client-ca\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.934730 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-client-ca\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.934839 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd7wb\" (UniqueName: \"kubernetes.io/projected/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-kube-api-access-fd7wb\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.934922 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-serving-cert\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.934954 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-config\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.936520 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-client-ca\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.936575 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-config\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.949127 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-serving-cert\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:21 crc kubenswrapper[4932]: I0218 19:39:21.971814 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd7wb\" (UniqueName: \"kubernetes.io/projected/bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed-kube-api-access-fd7wb\") pod \"route-controller-manager-878c4f777-qmvrf\" (UID: \"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed\") " pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:22 crc kubenswrapper[4932]: I0218 19:39:22.060465 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:22 crc kubenswrapper[4932]: I0218 19:39:22.571929 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"]
Feb 18 19:39:22 crc kubenswrapper[4932]: W0218 19:39:22.572312 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc2d4d85_8ed9_4c7b_bc43_d8120f8c85ed.slice/crio-fecc0f0c2e2081b30c8893cc5db2b3e3dd9207810d289e3b797e80f0344a03a1 WatchSource:0}: Error finding container fecc0f0c2e2081b30c8893cc5db2b3e3dd9207810d289e3b797e80f0344a03a1: Status 404 returned error can't find the container with id fecc0f0c2e2081b30c8893cc5db2b3e3dd9207810d289e3b797e80f0344a03a1
Feb 18 19:39:23 crc kubenswrapper[4932]: I0218 19:39:23.190371 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16265115-064b-4308-8c41-b58e058ed40d" path="/var/lib/kubelet/pods/16265115-064b-4308-8c41-b58e058ed40d/volumes"
Feb 18 19:39:23 crc kubenswrapper[4932]: I0218 19:39:23.255452 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" event={"ID":"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed","Type":"ContainerStarted","Data":"10c4a79a1ec9f093dd19c9ad8769bd988f1ea90dbe807ac6b81d666fb30e9743"}
Feb 18 19:39:23 crc kubenswrapper[4932]: I0218 19:39:23.255540 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" event={"ID":"bc2d4d85-8ed9-4c7b-bc43-d8120f8c85ed","Type":"ContainerStarted","Data":"fecc0f0c2e2081b30c8893cc5db2b3e3dd9207810d289e3b797e80f0344a03a1"}
Feb 18 19:39:23 crc kubenswrapper[4932]: I0218 19:39:23.255736 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:23 crc kubenswrapper[4932]: I0218 19:39:23.285000 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf" podStartSLOduration=4.284969997 podStartE2EDuration="4.284969997s" podCreationTimestamp="2026-02-18 19:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:23.283888261 +0000 UTC m=+326.865843106" watchObservedRunningTime="2026-02-18 19:39:23.284969997 +0000 UTC m=+326.866924882"
Feb 18 19:39:23 crc kubenswrapper[4932]: I0218 19:39:23.328807 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-878c4f777-qmvrf"
Feb 18 19:39:25 crc kubenswrapper[4932]: I0218 19:39:25.715052 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-5kmmh"
Feb 18 19:39:25 crc kubenswrapper[4932]: I0218 19:39:25.787968 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlcbj"]
Feb 18 19:39:50 crc kubenswrapper[4932]: I0218 19:39:50.836124 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" podUID="fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" containerName="registry" containerID="cri-o://977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c" gracePeriod=30
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.285580 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj"
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.378959 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-ca-trust-extracted\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379025 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-bound-sa-token\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379063 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-certificates\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379118 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxwvv\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-kube-api-access-kxwvv\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379151 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-tls\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379200 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-trusted-ca\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379231 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-installation-pull-secrets\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.379380 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\" (UID: \"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22\") "
Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.380007 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "trusted-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.380057 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.385933 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.385947 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-kube-api-access-kxwvv" (OuterVolumeSpecName: "kube-api-access-kxwvv") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "kube-api-access-kxwvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.386362 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.386681 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.395162 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.395753 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" (UID: "fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.451341 4932 generic.go:334] "Generic (PLEG): container finished" podID="fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" containerID="977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c" exitCode=0 Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.451400 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.451395 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" event={"ID":"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22","Type":"ContainerDied","Data":"977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c"} Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.451450 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wlcbj" event={"ID":"fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22","Type":"ContainerDied","Data":"e2cd6e9fe7b91c0ea246bc59cf9d11b75cc0eb7a103b52573fd6adf6936ac914"} Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.451473 4932 scope.go:117] "RemoveContainer" containerID="977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.471771 4932 scope.go:117] "RemoveContainer" containerID="977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c" Feb 18 19:39:51 crc kubenswrapper[4932]: E0218 19:39:51.472378 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c\": container with ID starting with 977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c not found: ID does not exist" containerID="977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.472448 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c"} err="failed to get container status \"977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c\": rpc error: code = NotFound desc = could not find container 
\"977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c\": container with ID starting with 977e5ae481b66aa029660ffb648170ce708da73076563e53e12c31c8a6b9455c not found: ID does not exist" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481146 4932 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481199 4932 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481211 4932 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481222 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxwvv\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-kube-api-access-kxwvv\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481234 4932 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481244 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.481254 4932 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.491311 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlcbj"] Feb 18 19:39:51 crc kubenswrapper[4932]: I0218 19:39:51.497430 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wlcbj"] Feb 18 19:39:53 crc kubenswrapper[4932]: I0218 19:39:53.189071 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" path="/var/lib/kubelet/pods/fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22/volumes" Feb 18 19:39:57 crc kubenswrapper[4932]: I0218 19:39:57.606421 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:39:57 crc kubenswrapper[4932]: I0218 19:39:57.607052 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.037157 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qvwc8"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.037947 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qvwc8" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="registry-server" 
containerID="cri-o://6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26" gracePeriod=30 Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.050920 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j2xgw"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.051279 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j2xgw" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="registry-server" containerID="cri-o://13c758cf33ac2064fd2a2bac98c4ca52868f7188bbf8e3e8b926c0341705af4b" gracePeriod=30 Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.063199 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5c79p"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.063516 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" containerID="cri-o://6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe" gracePeriod=30 Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.072566 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w2tj"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.072827 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4w2tj" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="registry-server" containerID="cri-o://8629bd2837aebb06f17bda76bfe6b4989212f8b67eec3674f76174649de59a2e" gracePeriod=30 Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.085536 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-44vtg"] Feb 18 19:39:59 crc kubenswrapper[4932]: E0218 
19:39:59.085933 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" containerName="registry" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.085970 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" containerName="registry" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.086145 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8fcd2b-53e6-4ccc-ac6e-c8850ce0ba22" containerName="registry" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.086880 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.091824 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chh8j"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.092135 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-chh8j" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="registry-server" containerID="cri-o://e34e27d0659e0d99e6372515305dc5e1613a602751683fd615bb6bd8747d32f2" gracePeriod=30 Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.105582 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-44vtg"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.190255 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnzpc\" (UniqueName: \"kubernetes.io/projected/58ed1571-b94a-4792-9c8f-ead2f0596e42-kube-api-access-lnzpc\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.190301 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58ed1571-b94a-4792-9c8f-ead2f0596e42-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.190332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58ed1571-b94a-4792-9c8f-ead2f0596e42-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.291370 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnzpc\" (UniqueName: \"kubernetes.io/projected/58ed1571-b94a-4792-9c8f-ead2f0596e42-kube-api-access-lnzpc\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.291426 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58ed1571-b94a-4792-9c8f-ead2f0596e42-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.291461 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58ed1571-b94a-4792-9c8f-ead2f0596e42-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.292796 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58ed1571-b94a-4792-9c8f-ead2f0596e42-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.301357 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58ed1571-b94a-4792-9c8f-ead2f0596e42-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.310639 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnzpc\" (UniqueName: \"kubernetes.io/projected/58ed1571-b94a-4792-9c8f-ead2f0596e42-kube-api-access-lnzpc\") pod \"marketplace-operator-79b997595-44vtg\" (UID: \"58ed1571-b94a-4792-9c8f-ead2f0596e42\") " pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.465146 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.483374 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.502861 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qvwc8" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.511895 4932 generic.go:334] "Generic (PLEG): container finished" podID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerID="8629bd2837aebb06f17bda76bfe6b4989212f8b67eec3674f76174649de59a2e" exitCode=0 Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.511999 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w2tj" event={"ID":"b77a623a-ff2e-45aa-9004-b211b0200a3f","Type":"ContainerDied","Data":"8629bd2837aebb06f17bda76bfe6b4989212f8b67eec3674f76174649de59a2e"} Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.513829 4932 generic.go:334] "Generic (PLEG): container finished" podID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerID="6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe" exitCode=0 Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.513919 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" event={"ID":"e39708f9-5d2d-4ed5-9243-7b71ef470ca7","Type":"ContainerDied","Data":"6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe"} Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.513953 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" event={"ID":"e39708f9-5d2d-4ed5-9243-7b71ef470ca7","Type":"ContainerDied","Data":"784badddcd9797871fec35aacb4b375a077788de958864c50c207fa8ea3d3eb2"} Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.513979 4932 scope.go:117] "RemoveContainer" containerID="6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.514374 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5c79p" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.524313 4932 generic.go:334] "Generic (PLEG): container finished" podID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerID="6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26" exitCode=0 Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.524788 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qvwc8" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.524792 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvwc8" event={"ID":"cafe1e82-ef19-4345-825e-cc9bf016b353","Type":"ContainerDied","Data":"6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26"} Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.525102 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvwc8" event={"ID":"cafe1e82-ef19-4345-825e-cc9bf016b353","Type":"ContainerDied","Data":"94c56c7588969970298ca76c9989e0d42da323b423ba2e42eec0825109130ea6"} Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.537663 4932 scope.go:117] "RemoveContainer" containerID="e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.540596 4932 generic.go:334] "Generic (PLEG): container finished" podID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerID="e34e27d0659e0d99e6372515305dc5e1613a602751683fd615bb6bd8747d32f2" exitCode=0 Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.540668 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerDied","Data":"e34e27d0659e0d99e6372515305dc5e1613a602751683fd615bb6bd8747d32f2"} Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 
19:39:59.544092 4932 generic.go:334] "Generic (PLEG): container finished" podID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerID="13c758cf33ac2064fd2a2bac98c4ca52868f7188bbf8e3e8b926c0341705af4b" exitCode=0 Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.544115 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerDied","Data":"13c758cf33ac2064fd2a2bac98c4ca52868f7188bbf8e3e8b926c0341705af4b"} Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.576031 4932 scope.go:117] "RemoveContainer" containerID="6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe" Feb 18 19:39:59 crc kubenswrapper[4932]: E0218 19:39:59.577928 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe\": container with ID starting with 6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe not found: ID does not exist" containerID="6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.577964 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe"} err="failed to get container status \"6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe\": rpc error: code = NotFound desc = could not find container \"6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe\": container with ID starting with 6db54e1d588c5e11df45c01ed2bdaafc28b0944981181eefaed60e31c6dbcafe not found: ID does not exist" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.577989 4932 scope.go:117] "RemoveContainer" containerID="e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e" Feb 18 19:39:59 crc kubenswrapper[4932]: E0218 
19:39:59.578258 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e\": container with ID starting with e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e not found: ID does not exist" containerID="e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.578281 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e"} err="failed to get container status \"e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e\": rpc error: code = NotFound desc = could not find container \"e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e\": container with ID starting with e945b67be4fe05ce2000bcfc583ec12f15b7e10010995a7a48aa0c973d205d5e not found: ID does not exist" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.578294 4932 scope.go:117] "RemoveContainer" containerID="6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.594880 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-utilities\") pod \"cafe1e82-ef19-4345-825e-cc9bf016b353\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.594932 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbm4m\" (UniqueName: \"kubernetes.io/projected/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-kube-api-access-rbm4m\") pod \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.594951 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-trusted-ca\") pod \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.595007 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr45c\" (UniqueName: \"kubernetes.io/projected/cafe1e82-ef19-4345-825e-cc9bf016b353-kube-api-access-sr45c\") pod \"cafe1e82-ef19-4345-825e-cc9bf016b353\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.595037 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-operator-metrics\") pod \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\" (UID: \"e39708f9-5d2d-4ed5-9243-7b71ef470ca7\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.595072 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-catalog-content\") pod \"cafe1e82-ef19-4345-825e-cc9bf016b353\" (UID: \"cafe1e82-ef19-4345-825e-cc9bf016b353\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.597550 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e39708f9-5d2d-4ed5-9243-7b71ef470ca7" (UID: "e39708f9-5d2d-4ed5-9243-7b71ef470ca7"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.599889 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-kube-api-access-rbm4m" (OuterVolumeSpecName: "kube-api-access-rbm4m") pod "e39708f9-5d2d-4ed5-9243-7b71ef470ca7" (UID: "e39708f9-5d2d-4ed5-9243-7b71ef470ca7"). InnerVolumeSpecName "kube-api-access-rbm4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.600057 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e39708f9-5d2d-4ed5-9243-7b71ef470ca7" (UID: "e39708f9-5d2d-4ed5-9243-7b71ef470ca7"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.601619 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cafe1e82-ef19-4345-825e-cc9bf016b353-kube-api-access-sr45c" (OuterVolumeSpecName: "kube-api-access-sr45c") pod "cafe1e82-ef19-4345-825e-cc9bf016b353" (UID: "cafe1e82-ef19-4345-825e-cc9bf016b353"). InnerVolumeSpecName "kube-api-access-sr45c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.603728 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-utilities" (OuterVolumeSpecName: "utilities") pod "cafe1e82-ef19-4345-825e-cc9bf016b353" (UID: "cafe1e82-ef19-4345-825e-cc9bf016b353"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.610340 4932 scope.go:117] "RemoveContainer" containerID="615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.660125 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.663283 4932 scope.go:117] "RemoveContainer" containerID="b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.673429 4932 scope.go:117] "RemoveContainer" containerID="6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26" Feb 18 19:39:59 crc kubenswrapper[4932]: E0218 19:39:59.673752 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26\": container with ID starting with 6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26 not found: ID does not exist" containerID="6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.673779 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26"} err="failed to get container status \"6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26\": rpc error: code = NotFound desc = could not find container \"6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26\": container with ID starting with 6498dec7f3004ba8f78c5dff3be4a4dafeba91d1de501891218f69f8d9282e26 not found: ID does not exist" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.677363 4932 scope.go:117] "RemoveContainer" 
containerID="615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13" Feb 18 19:39:59 crc kubenswrapper[4932]: E0218 19:39:59.678711 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13\": container with ID starting with 615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13 not found: ID does not exist" containerID="615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.678741 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13"} err="failed to get container status \"615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13\": rpc error: code = NotFound desc = could not find container \"615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13\": container with ID starting with 615e50229cae23ff110c15e4063527051bcf916c17f8ff5f0d5558ecb2cc2e13 not found: ID does not exist" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.678760 4932 scope.go:117] "RemoveContainer" containerID="b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f" Feb 18 19:39:59 crc kubenswrapper[4932]: E0218 19:39:59.679878 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f\": container with ID starting with b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f not found: ID does not exist" containerID="b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.679900 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f"} err="failed to get container status \"b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f\": rpc error: code = NotFound desc = could not find container \"b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f\": container with ID starting with b1ab147c23c564a23da14f52dedbbfb0b71ab40cc857242937d70043e546697f not found: ID does not exist" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.681587 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cafe1e82-ef19-4345-825e-cc9bf016b353" (UID: "cafe1e82-ef19-4345-825e-cc9bf016b353"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.699356 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.699385 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cafe1e82-ef19-4345-825e-cc9bf016b353-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.699395 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbm4m\" (UniqueName: \"kubernetes.io/projected/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-kube-api-access-rbm4m\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.699408 4932 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-trusted-ca\") on node \"crc\" DevicePath 
\"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.699417 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr45c\" (UniqueName: \"kubernetes.io/projected/cafe1e82-ef19-4345-825e-cc9bf016b353-kube-api-access-sr45c\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.699427 4932 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e39708f9-5d2d-4ed5-9243-7b71ef470ca7-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.710591 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.719434 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-dp67c"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.719872 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" podUID="93e7ae54-b8ce-4890-9901-514d2f4b7f0a" containerName="controller-manager" containerID="cri-o://250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b" gracePeriod=30 Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.751925 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800064 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-utilities\") pod \"ce921030-ec82-420d-a9e7-cd04ee7e055b\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800135 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-catalog-content\") pod \"ce921030-ec82-420d-a9e7-cd04ee7e055b\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800195 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc64x\" (UniqueName: \"kubernetes.io/projected/ce921030-ec82-420d-a9e7-cd04ee7e055b-kube-api-access-fc64x\") pod \"ce921030-ec82-420d-a9e7-cd04ee7e055b\" (UID: \"ce921030-ec82-420d-a9e7-cd04ee7e055b\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800225 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-catalog-content\") pod \"62bbf001-ce57-471f-ad28-1d892d0d30e9\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800593 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgm8v\" (UniqueName: \"kubernetes.io/projected/62bbf001-ce57-471f-ad28-1d892d0d30e9-kube-api-access-rgm8v\") pod \"62bbf001-ce57-471f-ad28-1d892d0d30e9\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800661 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-utilities\") pod \"62bbf001-ce57-471f-ad28-1d892d0d30e9\" (UID: \"62bbf001-ce57-471f-ad28-1d892d0d30e9\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.800781 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-utilities" (OuterVolumeSpecName: "utilities") pod "ce921030-ec82-420d-a9e7-cd04ee7e055b" (UID: "ce921030-ec82-420d-a9e7-cd04ee7e055b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.801639 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.802229 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-utilities" (OuterVolumeSpecName: "utilities") pod "62bbf001-ce57-471f-ad28-1d892d0d30e9" (UID: "62bbf001-ce57-471f-ad28-1d892d0d30e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.805138 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce921030-ec82-420d-a9e7-cd04ee7e055b-kube-api-access-fc64x" (OuterVolumeSpecName: "kube-api-access-fc64x") pod "ce921030-ec82-420d-a9e7-cd04ee7e055b" (UID: "ce921030-ec82-420d-a9e7-cd04ee7e055b"). InnerVolumeSpecName "kube-api-access-fc64x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.805453 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62bbf001-ce57-471f-ad28-1d892d0d30e9-kube-api-access-rgm8v" (OuterVolumeSpecName: "kube-api-access-rgm8v") pod "62bbf001-ce57-471f-ad28-1d892d0d30e9" (UID: "62bbf001-ce57-471f-ad28-1d892d0d30e9"). InnerVolumeSpecName "kube-api-access-rgm8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.840418 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5c79p"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.847975 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5c79p"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.860617 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qvwc8"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.861295 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62bbf001-ce57-471f-ad28-1d892d0d30e9" (UID: "62bbf001-ce57-471f-ad28-1d892d0d30e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.865488 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qvwc8"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903160 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lttp\" (UniqueName: \"kubernetes.io/projected/b77a623a-ff2e-45aa-9004-b211b0200a3f-kube-api-access-7lttp\") pod \"b77a623a-ff2e-45aa-9004-b211b0200a3f\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903448 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-catalog-content\") pod \"b77a623a-ff2e-45aa-9004-b211b0200a3f\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903529 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-utilities\") pod \"b77a623a-ff2e-45aa-9004-b211b0200a3f\" (UID: \"b77a623a-ff2e-45aa-9004-b211b0200a3f\") " Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903722 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903751 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc64x\" (UniqueName: \"kubernetes.io/projected/ce921030-ec82-420d-a9e7-cd04ee7e055b-kube-api-access-fc64x\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903760 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/62bbf001-ce57-471f-ad28-1d892d0d30e9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.903783 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgm8v\" (UniqueName: \"kubernetes.io/projected/62bbf001-ce57-471f-ad28-1d892d0d30e9-kube-api-access-rgm8v\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.905441 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-utilities" (OuterVolumeSpecName: "utilities") pod "b77a623a-ff2e-45aa-9004-b211b0200a3f" (UID: "b77a623a-ff2e-45aa-9004-b211b0200a3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.907700 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b77a623a-ff2e-45aa-9004-b211b0200a3f-kube-api-access-7lttp" (OuterVolumeSpecName: "kube-api-access-7lttp") pod "b77a623a-ff2e-45aa-9004-b211b0200a3f" (UID: "b77a623a-ff2e-45aa-9004-b211b0200a3f"). InnerVolumeSpecName "kube-api-access-7lttp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.937242 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b77a623a-ff2e-45aa-9004-b211b0200a3f" (UID: "b77a623a-ff2e-45aa-9004-b211b0200a3f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.947883 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-44vtg"] Feb 18 19:39:59 crc kubenswrapper[4932]: I0218 19:39:59.948076 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce921030-ec82-420d-a9e7-cd04ee7e055b" (UID: "ce921030-ec82-420d-a9e7-cd04ee7e055b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4932]: W0218 19:39:59.965191 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58ed1571_b94a_4792_9c8f_ead2f0596e42.slice/crio-05111da017dff01613e4d083a44df3ba4c246bd1e9c23a6764b357c306e87978 WatchSource:0}: Error finding container 05111da017dff01613e4d083a44df3ba4c246bd1e9c23a6764b357c306e87978: Status 404 returned error can't find the container with id 05111da017dff01613e4d083a44df3ba4c246bd1e9c23a6764b357c306e87978 Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.004874 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lttp\" (UniqueName: \"kubernetes.io/projected/b77a623a-ff2e-45aa-9004-b211b0200a3f-kube-api-access-7lttp\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.004900 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.004909 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ce921030-ec82-420d-a9e7-cd04ee7e055b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.004917 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77a623a-ff2e-45aa-9004-b211b0200a3f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.054019 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.206722 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-client-ca\") pod \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.206839 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-serving-cert\") pod \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.206909 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5pz5\" (UniqueName: \"kubernetes.io/projected/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-kube-api-access-c5pz5\") pod \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.206929 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-config\") pod \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " Feb 18 
19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.206970 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-proxy-ca-bundles\") pod \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\" (UID: \"93e7ae54-b8ce-4890-9901-514d2f4b7f0a\") " Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.207673 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-client-ca" (OuterVolumeSpecName: "client-ca") pod "93e7ae54-b8ce-4890-9901-514d2f4b7f0a" (UID: "93e7ae54-b8ce-4890-9901-514d2f4b7f0a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.208007 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-config" (OuterVolumeSpecName: "config") pod "93e7ae54-b8ce-4890-9901-514d2f4b7f0a" (UID: "93e7ae54-b8ce-4890-9901-514d2f4b7f0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.208040 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "93e7ae54-b8ce-4890-9901-514d2f4b7f0a" (UID: "93e7ae54-b8ce-4890-9901-514d2f4b7f0a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.210813 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-kube-api-access-c5pz5" (OuterVolumeSpecName: "kube-api-access-c5pz5") pod "93e7ae54-b8ce-4890-9901-514d2f4b7f0a" (UID: "93e7ae54-b8ce-4890-9901-514d2f4b7f0a"). InnerVolumeSpecName "kube-api-access-c5pz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.211928 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "93e7ae54-b8ce-4890-9901-514d2f4b7f0a" (UID: "93e7ae54-b8ce-4890-9901-514d2f4b7f0a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.308478 4932 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.308530 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5pz5\" (UniqueName: \"kubernetes.io/projected/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-kube-api-access-c5pz5\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.308548 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.308562 4932 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.308576 4932 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93e7ae54-b8ce-4890-9901-514d2f4b7f0a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.552128 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w2tj" event={"ID":"b77a623a-ff2e-45aa-9004-b211b0200a3f","Type":"ContainerDied","Data":"79bf00f2e14eaea6ac861e5d5414045b4e7af7c9494be58a0ddf97f7bbd0066e"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.552158 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w2tj" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.552222 4932 scope.go:117] "RemoveContainer" containerID="8629bd2837aebb06f17bda76bfe6b4989212f8b67eec3674f76174649de59a2e" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.557159 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chh8j" event={"ID":"ce921030-ec82-420d-a9e7-cd04ee7e055b","Type":"ContainerDied","Data":"df158c2125177f92039a79a6401f4bb6f7b2c14373fe74c537b86d94e6f1ab0e"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.557233 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chh8j" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.558320 4932 generic.go:334] "Generic (PLEG): container finished" podID="93e7ae54-b8ce-4890-9901-514d2f4b7f0a" containerID="250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b" exitCode=0 Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.558387 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" event={"ID":"93e7ae54-b8ce-4890-9901-514d2f4b7f0a","Type":"ContainerDied","Data":"250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.558408 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" event={"ID":"93e7ae54-b8ce-4890-9901-514d2f4b7f0a","Type":"ContainerDied","Data":"69e9e84f33df4568345d5c91cf10b5627bbd157605a893896cb062f70472ec3a"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.558460 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db6f585-dp67c" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.567282 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2xgw" event={"ID":"62bbf001-ce57-471f-ad28-1d892d0d30e9","Type":"ContainerDied","Data":"598a3819cd069f787a558e804a3b29d8f39ee54c7fd7148d56ad085f056a9d34"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.567518 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j2xgw" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.569415 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" event={"ID":"58ed1571-b94a-4792-9c8f-ead2f0596e42","Type":"ContainerStarted","Data":"705af7e82397d874c7abff7a640e68623d95a89fe326a1d8a328c9df6252c17d"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.569455 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" event={"ID":"58ed1571-b94a-4792-9c8f-ead2f0596e42","Type":"ContainerStarted","Data":"05111da017dff01613e4d083a44df3ba4c246bd1e9c23a6764b357c306e87978"} Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.570036 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.578162 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.578278 4932 scope.go:117] "RemoveContainer" containerID="399844cbfb1eed438dbae81663b568d5834893c25f35e7193be65debdd42cfaa" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.596911 4932 scope.go:117] "RemoveContainer" containerID="a6fd3575dcddfe36fd8dfcc8e6bcb0f7035ca23b01b700d078f298418c1896e8" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.607368 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-44vtg" podStartSLOduration=1.606238883 podStartE2EDuration="1.606238883s" podCreationTimestamp="2026-02-18 19:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:00.603625729 +0000 UTC 
m=+364.185580594" watchObservedRunningTime="2026-02-18 19:40:00.606238883 +0000 UTC m=+364.188193728" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.628153 4932 scope.go:117] "RemoveContainer" containerID="e34e27d0659e0d99e6372515305dc5e1613a602751683fd615bb6bd8747d32f2" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.651237 4932 scope.go:117] "RemoveContainer" containerID="fb361fdaea379654dbc86cd68517d68e807abad8cc09c0668f73e69287045372" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.651458 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w2tj"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.661351 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w2tj"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.668221 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chh8j"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.674413 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-chh8j"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.675957 4932 scope.go:117] "RemoveContainer" containerID="bda935338a806285152d3571a5562901d0dc27851a41082e686230cc48a54915" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.682043 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-dp67c"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.684538 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-dp67c"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.691630 4932 scope.go:117] "RemoveContainer" containerID="250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.693366 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-j2xgw"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.696972 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j2xgw"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.707978 4932 scope.go:117] "RemoveContainer" containerID="250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.708395 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b\": container with ID starting with 250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b not found: ID does not exist" containerID="250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.708426 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b"} err="failed to get container status \"250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b\": rpc error: code = NotFound desc = could not find container \"250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b\": container with ID starting with 250d8c6fe62715b714c699df51552ef5ce43496f68a287cae2c7849c5452f06b not found: ID does not exist" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.708461 4932 scope.go:117] "RemoveContainer" containerID="13c758cf33ac2064fd2a2bac98c4ca52868f7188bbf8e3e8b926c0341705af4b" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.723397 4932 scope.go:117] "RemoveContainer" containerID="5fa3af86ad8e20edc339dfb0d7d75e1dba3410f262c6355782e4c035746708c1" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.744658 4932 scope.go:117] "RemoveContainer" 
containerID="6d3ff69895d4bcdcf15d410bfbcd335c0b79b07284d7d99d33d18f064ce3f033" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757691 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-s6z9t"] Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757875 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757891 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757901 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757907 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757917 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757923 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757932 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757938 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757947 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757952 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757962 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757967 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757978 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757985 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.757993 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e7ae54-b8ce-4890-9901-514d2f4b7f0a" containerName="controller-manager" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.757999 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e7ae54-b8ce-4890-9901-514d2f4b7f0a" containerName="controller-manager" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758007 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758013 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="extract-utilities" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758022 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758027 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758037 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758042 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758050 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758056 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758062 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758067 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758074 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758080 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" Feb 18 19:40:00 crc kubenswrapper[4932]: E0218 19:40:00.758089 4932 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758095 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="extract-content" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758168 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758190 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758201 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" containerName="marketplace-operator" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758211 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758218 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758226 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e7ae54-b8ce-4890-9901-514d2f4b7f0a" containerName="controller-manager" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758234 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" containerName="registry-server" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.758554 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.760058 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.762510 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.762825 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.764700 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.765146 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.765874 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.768014 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.772095 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-s6z9t"] Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.917471 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-config\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " 
pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.917522 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d96beb-ea1c-44c9-8959-625e6dd22b23-serving-cert\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.917574 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnpsl\" (UniqueName: \"kubernetes.io/projected/89d96beb-ea1c-44c9-8959-625e6dd22b23-kube-api-access-mnpsl\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.917616 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-proxy-ca-bundles\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:00 crc kubenswrapper[4932]: I0218 19:40:00.917634 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-client-ca\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.019036 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-config\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.019135 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d96beb-ea1c-44c9-8959-625e6dd22b23-serving-cert\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.019254 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnpsl\" (UniqueName: \"kubernetes.io/projected/89d96beb-ea1c-44c9-8959-625e6dd22b23-kube-api-access-mnpsl\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.019300 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-proxy-ca-bundles\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.019331 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-client-ca\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.021207 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-client-ca\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.021745 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-config\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.022268 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89d96beb-ea1c-44c9-8959-625e6dd22b23-proxy-ca-bundles\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.025801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d96beb-ea1c-44c9-8959-625e6dd22b23-serving-cert\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.052899 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnpsl\" (UniqueName: \"kubernetes.io/projected/89d96beb-ea1c-44c9-8959-625e6dd22b23-kube-api-access-mnpsl\") pod \"controller-manager-8b5db5768-s6z9t\" (UID: \"89d96beb-ea1c-44c9-8959-625e6dd22b23\") " pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc 
kubenswrapper[4932]: I0218 19:40:01.071694 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.199814 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62bbf001-ce57-471f-ad28-1d892d0d30e9" path="/var/lib/kubelet/pods/62bbf001-ce57-471f-ad28-1d892d0d30e9/volumes" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.203045 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e7ae54-b8ce-4890-9901-514d2f4b7f0a" path="/var/lib/kubelet/pods/93e7ae54-b8ce-4890-9901-514d2f4b7f0a/volumes" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.203844 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b77a623a-ff2e-45aa-9004-b211b0200a3f" path="/var/lib/kubelet/pods/b77a623a-ff2e-45aa-9004-b211b0200a3f/volumes" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.206095 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cafe1e82-ef19-4345-825e-cc9bf016b353" path="/var/lib/kubelet/pods/cafe1e82-ef19-4345-825e-cc9bf016b353/volumes" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.207641 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce921030-ec82-420d-a9e7-cd04ee7e055b" path="/var/lib/kubelet/pods/ce921030-ec82-420d-a9e7-cd04ee7e055b/volumes" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.209598 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39708f9-5d2d-4ed5-9243-7b71ef470ca7" path="/var/lib/kubelet/pods/e39708f9-5d2d-4ed5-9243-7b71ef470ca7/volumes" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.327969 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b5db5768-s6z9t"] Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.457460 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-fbhgz"] Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.458551 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.462595 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.467711 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbhgz"] Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.526571 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-catalog-content\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.526631 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwjtj\" (UniqueName: \"kubernetes.io/projected/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-kube-api-access-nwjtj\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.526665 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-utilities\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.576356 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" event={"ID":"89d96beb-ea1c-44c9-8959-625e6dd22b23","Type":"ContainerStarted","Data":"52b1f47f7c74eda759908c839be252c964d6d2ae23011adc3820aad79511bb3b"} Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.576405 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" event={"ID":"89d96beb-ea1c-44c9-8959-625e6dd22b23","Type":"ContainerStarted","Data":"8f16b047ffa741d4d75ade3d9bd1041c252050c251d6eb5d34728bf951dc4f26"} Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.576602 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.579643 4932 patch_prober.go:28] interesting pod/controller-manager-8b5db5768-s6z9t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body= Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.579773 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" podUID="89d96beb-ea1c-44c9-8959-625e6dd22b23" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.592757 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" podStartSLOduration=2.59274073 podStartE2EDuration="2.59274073s" podCreationTimestamp="2026-02-18 19:39:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 
19:40:01.589318196 +0000 UTC m=+365.171273101" watchObservedRunningTime="2026-02-18 19:40:01.59274073 +0000 UTC m=+365.174695575" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.628299 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-catalog-content\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.628459 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwjtj\" (UniqueName: \"kubernetes.io/projected/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-kube-api-access-nwjtj\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.628546 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-utilities\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.629156 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-catalog-content\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.629307 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-utilities\") pod \"redhat-marketplace-fbhgz\" (UID: 
\"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.664116 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwjtj\" (UniqueName: \"kubernetes.io/projected/82d8d8a1-602e-4738-8f7c-68d5d99c8a08-kube-api-access-nwjtj\") pod \"redhat-marketplace-fbhgz\" (UID: \"82d8d8a1-602e-4738-8f7c-68d5d99c8a08\") " pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.692416 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mshwj"] Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.693806 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.696889 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.708799 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mshwj"] Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.783845 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.832129 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d67ed032-a807-4d71-9580-3dee5922bc22-utilities\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.832438 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d67ed032-a807-4d71-9580-3dee5922bc22-catalog-content\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.832574 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bc29\" (UniqueName: \"kubernetes.io/projected/d67ed032-a807-4d71-9580-3dee5922bc22-kube-api-access-6bc29\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.933859 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bc29\" (UniqueName: \"kubernetes.io/projected/d67ed032-a807-4d71-9580-3dee5922bc22-kube-api-access-6bc29\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.933943 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d67ed032-a807-4d71-9580-3dee5922bc22-utilities\") pod 
\"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.933978 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d67ed032-a807-4d71-9580-3dee5922bc22-catalog-content\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.934409 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d67ed032-a807-4d71-9580-3dee5922bc22-catalog-content\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.934565 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d67ed032-a807-4d71-9580-3dee5922bc22-utilities\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.951188 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bc29\" (UniqueName: \"kubernetes.io/projected/d67ed032-a807-4d71-9580-3dee5922bc22-kube-api-access-6bc29\") pod \"certified-operators-mshwj\" (UID: \"d67ed032-a807-4d71-9580-3dee5922bc22\") " pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:01 crc kubenswrapper[4932]: I0218 19:40:01.981394 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbhgz"] Feb 18 19:40:01 crc kubenswrapper[4932]: W0218 19:40:01.988398 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82d8d8a1_602e_4738_8f7c_68d5d99c8a08.slice/crio-2f57323ca9956024d8ee49ffa6dafd2889ff940f5c1566bb8a0684bc3a233631 WatchSource:0}: Error finding container 2f57323ca9956024d8ee49ffa6dafd2889ff940f5c1566bb8a0684bc3a233631: Status 404 returned error can't find the container with id 2f57323ca9956024d8ee49ffa6dafd2889ff940f5c1566bb8a0684bc3a233631 Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.006286 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.410948 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mshwj"] Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.602456 4932 generic.go:334] "Generic (PLEG): container finished" podID="82d8d8a1-602e-4738-8f7c-68d5d99c8a08" containerID="d99d3b39f23c2b47699c019d0e906e7002f992cdeffa7929314656dae06f42c4" exitCode=0 Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.602542 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbhgz" event={"ID":"82d8d8a1-602e-4738-8f7c-68d5d99c8a08","Type":"ContainerDied","Data":"d99d3b39f23c2b47699c019d0e906e7002f992cdeffa7929314656dae06f42c4"} Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.602835 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbhgz" event={"ID":"82d8d8a1-602e-4738-8f7c-68d5d99c8a08","Type":"ContainerStarted","Data":"2f57323ca9956024d8ee49ffa6dafd2889ff940f5c1566bb8a0684bc3a233631"} Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.604295 4932 generic.go:334] "Generic (PLEG): container finished" podID="d67ed032-a807-4d71-9580-3dee5922bc22" containerID="9e9d8a40b56a12c6453359140a2dee14ee9f02a8b7b7fce251d94bda397a7d95" exitCode=0 Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.604973 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mshwj" event={"ID":"d67ed032-a807-4d71-9580-3dee5922bc22","Type":"ContainerDied","Data":"9e9d8a40b56a12c6453359140a2dee14ee9f02a8b7b7fce251d94bda397a7d95"} Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.605004 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mshwj" event={"ID":"d67ed032-a807-4d71-9580-3dee5922bc22","Type":"ContainerStarted","Data":"9f4b7662a0b48cc385b312fbcc39ba68b72a4b2d48430de063a54454cf66fb83"} Feb 18 19:40:02 crc kubenswrapper[4932]: I0218 19:40:02.611947 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8b5db5768-s6z9t" Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.849286 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6mzhg"] Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.850583 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.852385 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.859855 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6mzhg"] Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.961087 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5z94\" (UniqueName: \"kubernetes.io/projected/f3054517-1735-4758-9f31-1bea7ef3a90f-kube-api-access-d5z94\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.961127 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3054517-1735-4758-9f31-1bea7ef3a90f-utilities\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:03 crc kubenswrapper[4932]: I0218 19:40:03.961153 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3054517-1735-4758-9f31-1bea7ef3a90f-catalog-content\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.050009 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2xmq4"] Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.051084 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.055340 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.061953 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2xmq4"] Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.062333 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5z94\" (UniqueName: \"kubernetes.io/projected/f3054517-1735-4758-9f31-1bea7ef3a90f-kube-api-access-d5z94\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.062420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3054517-1735-4758-9f31-1bea7ef3a90f-utilities\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.062458 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3054517-1735-4758-9f31-1bea7ef3a90f-catalog-content\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.062842 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3054517-1735-4758-9f31-1bea7ef3a90f-catalog-content\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" 
Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.062943 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3054517-1735-4758-9f31-1bea7ef3a90f-utilities\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.093354 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5z94\" (UniqueName: \"kubernetes.io/projected/f3054517-1735-4758-9f31-1bea7ef3a90f-kube-api-access-d5z94\") pod \"redhat-operators-6mzhg\" (UID: \"f3054517-1735-4758-9f31-1bea7ef3a90f\") " pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.163673 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrrjl\" (UniqueName: \"kubernetes.io/projected/456839f3-9db1-45f2-bef4-c2b272a0f390-kube-api-access-wrrjl\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.163823 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456839f3-9db1-45f2-bef4-c2b272a0f390-utilities\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.163870 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456839f3-9db1-45f2-bef4-c2b272a0f390-catalog-content\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " 
pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.170400 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.268087 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456839f3-9db1-45f2-bef4-c2b272a0f390-catalog-content\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.268628 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrrjl\" (UniqueName: \"kubernetes.io/projected/456839f3-9db1-45f2-bef4-c2b272a0f390-kube-api-access-wrrjl\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.268740 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456839f3-9db1-45f2-bef4-c2b272a0f390-utilities\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.270793 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456839f3-9db1-45f2-bef4-c2b272a0f390-catalog-content\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.271318 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/456839f3-9db1-45f2-bef4-c2b272a0f390-utilities\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.292637 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrrjl\" (UniqueName: \"kubernetes.io/projected/456839f3-9db1-45f2-bef4-c2b272a0f390-kube-api-access-wrrjl\") pod \"community-operators-2xmq4\" (UID: \"456839f3-9db1-45f2-bef4-c2b272a0f390\") " pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.365980 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.575722 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6mzhg"] Feb 18 19:40:04 crc kubenswrapper[4932]: W0218 19:40:04.583916 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3054517_1735_4758_9f31_1bea7ef3a90f.slice/crio-99e9bfe583756fb3a29d9519993bd57ebb27c1770663002015a8c1f3ac9cf45a WatchSource:0}: Error finding container 99e9bfe583756fb3a29d9519993bd57ebb27c1770663002015a8c1f3ac9cf45a: Status 404 returned error can't find the container with id 99e9bfe583756fb3a29d9519993bd57ebb27c1770663002015a8c1f3ac9cf45a Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.619591 4932 generic.go:334] "Generic (PLEG): container finished" podID="82d8d8a1-602e-4738-8f7c-68d5d99c8a08" containerID="5fd5e5ff9555cd90d087a8173924811e40b8cc3af67f40693aca6cceca6c0c2f" exitCode=0 Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.620485 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbhgz" 
event={"ID":"82d8d8a1-602e-4738-8f7c-68d5d99c8a08","Type":"ContainerDied","Data":"5fd5e5ff9555cd90d087a8173924811e40b8cc3af67f40693aca6cceca6c0c2f"} Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.624493 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mzhg" event={"ID":"f3054517-1735-4758-9f31-1bea7ef3a90f","Type":"ContainerStarted","Data":"99e9bfe583756fb3a29d9519993bd57ebb27c1770663002015a8c1f3ac9cf45a"} Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.630267 4932 generic.go:334] "Generic (PLEG): container finished" podID="d67ed032-a807-4d71-9580-3dee5922bc22" containerID="b13d4bf7f51c9adf718207ab5a7e0347aa4ebf14771a2bd87f6ecf36dd9bd765" exitCode=0 Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.630309 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mshwj" event={"ID":"d67ed032-a807-4d71-9580-3dee5922bc22","Type":"ContainerDied","Data":"b13d4bf7f51c9adf718207ab5a7e0347aa4ebf14771a2bd87f6ecf36dd9bd765"} Feb 18 19:40:04 crc kubenswrapper[4932]: I0218 19:40:04.773494 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2xmq4"] Feb 18 19:40:04 crc kubenswrapper[4932]: W0218 19:40:04.812608 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod456839f3_9db1_45f2_bef4_c2b272a0f390.slice/crio-00577a5880bf25c36419884f1850f5db71c4de538054f1d4831060ee48026961 WatchSource:0}: Error finding container 00577a5880bf25c36419884f1850f5db71c4de538054f1d4831060ee48026961: Status 404 returned error can't find the container with id 00577a5880bf25c36419884f1850f5db71c4de538054f1d4831060ee48026961 Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.635734 4932 generic.go:334] "Generic (PLEG): container finished" podID="456839f3-9db1-45f2-bef4-c2b272a0f390" 
containerID="37ada88bd394eeea79fbc62cb96deb7d09f38d21554e575b9c618962e240315a" exitCode=0 Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.635807 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xmq4" event={"ID":"456839f3-9db1-45f2-bef4-c2b272a0f390","Type":"ContainerDied","Data":"37ada88bd394eeea79fbc62cb96deb7d09f38d21554e575b9c618962e240315a"} Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.636048 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xmq4" event={"ID":"456839f3-9db1-45f2-bef4-c2b272a0f390","Type":"ContainerStarted","Data":"00577a5880bf25c36419884f1850f5db71c4de538054f1d4831060ee48026961"} Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.639154 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mshwj" event={"ID":"d67ed032-a807-4d71-9580-3dee5922bc22","Type":"ContainerStarted","Data":"3705d494829488e5cc341bb9c8716dc343e9837ebf78a1f2536bfda68d93fdf6"} Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.641713 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbhgz" event={"ID":"82d8d8a1-602e-4738-8f7c-68d5d99c8a08","Type":"ContainerStarted","Data":"d243e23d240c6397ccb0eaab8e195190faa489f15ef43ff8e4f256611a078327"} Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.642957 4932 generic.go:334] "Generic (PLEG): container finished" podID="f3054517-1735-4758-9f31-1bea7ef3a90f" containerID="7e8ec96a09d23aa647c947e47365d37a4b7d04937b42f92aa6a00e1d7757fdf2" exitCode=0 Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.642983 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mzhg" event={"ID":"f3054517-1735-4758-9f31-1bea7ef3a90f","Type":"ContainerDied","Data":"7e8ec96a09d23aa647c947e47365d37a4b7d04937b42f92aa6a00e1d7757fdf2"} Feb 18 19:40:05 crc kubenswrapper[4932]: 
I0218 19:40:05.702500 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mshwj" podStartSLOduration=2.208082834 podStartE2EDuration="4.702482299s" podCreationTimestamp="2026-02-18 19:40:01 +0000 UTC" firstStartedPulling="2026-02-18 19:40:02.605869466 +0000 UTC m=+366.187824331" lastFinishedPulling="2026-02-18 19:40:05.100268911 +0000 UTC m=+368.682223796" observedRunningTime="2026-02-18 19:40:05.679608472 +0000 UTC m=+369.261563317" watchObservedRunningTime="2026-02-18 19:40:05.702482299 +0000 UTC m=+369.284437144" Feb 18 19:40:05 crc kubenswrapper[4932]: I0218 19:40:05.725971 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fbhgz" podStartSLOduration=2.342453497 podStartE2EDuration="4.725953751s" podCreationTimestamp="2026-02-18 19:40:01 +0000 UTC" firstStartedPulling="2026-02-18 19:40:02.605414165 +0000 UTC m=+366.187369040" lastFinishedPulling="2026-02-18 19:40:04.988914449 +0000 UTC m=+368.570869294" observedRunningTime="2026-02-18 19:40:05.705720438 +0000 UTC m=+369.287675283" watchObservedRunningTime="2026-02-18 19:40:05.725953751 +0000 UTC m=+369.307908596" Feb 18 19:40:06 crc kubenswrapper[4932]: I0218 19:40:06.650999 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mzhg" event={"ID":"f3054517-1735-4758-9f31-1bea7ef3a90f","Type":"ContainerStarted","Data":"467703b6550e8102f08da900fae7a0802af8d985d81cea88089cddb03366e6e3"} Feb 18 19:40:06 crc kubenswrapper[4932]: I0218 19:40:06.653918 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xmq4" event={"ID":"456839f3-9db1-45f2-bef4-c2b272a0f390","Type":"ContainerStarted","Data":"a66d421c1b72eb5fbbc3d3c93de3b528bb6aad0daaa1b161e3caa9ec9473bb5b"} Feb 18 19:40:07 crc kubenswrapper[4932]: I0218 19:40:07.665255 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="f3054517-1735-4758-9f31-1bea7ef3a90f" containerID="467703b6550e8102f08da900fae7a0802af8d985d81cea88089cddb03366e6e3" exitCode=0 Feb 18 19:40:07 crc kubenswrapper[4932]: I0218 19:40:07.665401 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mzhg" event={"ID":"f3054517-1735-4758-9f31-1bea7ef3a90f","Type":"ContainerDied","Data":"467703b6550e8102f08da900fae7a0802af8d985d81cea88089cddb03366e6e3"} Feb 18 19:40:07 crc kubenswrapper[4932]: I0218 19:40:07.671600 4932 generic.go:334] "Generic (PLEG): container finished" podID="456839f3-9db1-45f2-bef4-c2b272a0f390" containerID="a66d421c1b72eb5fbbc3d3c93de3b528bb6aad0daaa1b161e3caa9ec9473bb5b" exitCode=0 Feb 18 19:40:07 crc kubenswrapper[4932]: I0218 19:40:07.671635 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xmq4" event={"ID":"456839f3-9db1-45f2-bef4-c2b272a0f390","Type":"ContainerDied","Data":"a66d421c1b72eb5fbbc3d3c93de3b528bb6aad0daaa1b161e3caa9ec9473bb5b"} Feb 18 19:40:08 crc kubenswrapper[4932]: I0218 19:40:08.681273 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2xmq4" event={"ID":"456839f3-9db1-45f2-bef4-c2b272a0f390","Type":"ContainerStarted","Data":"33334224c8879576f443f718e955b7a6f9f37dec1fa73436b9ccf6d9fa42099c"} Feb 18 19:40:08 crc kubenswrapper[4932]: I0218 19:40:08.685905 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6mzhg" event={"ID":"f3054517-1735-4758-9f31-1bea7ef3a90f","Type":"ContainerStarted","Data":"62954f9d0b85f235d9b60cd7e44f4be526d4c5e71b3667788c5ffe7f906673ad"} Feb 18 19:40:08 crc kubenswrapper[4932]: I0218 19:40:08.724534 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2xmq4" podStartSLOduration=2.290111821 podStartE2EDuration="4.724500035s" podCreationTimestamp="2026-02-18 19:40:04 +0000 UTC" 
firstStartedPulling="2026-02-18 19:40:05.637317562 +0000 UTC m=+369.219272427" lastFinishedPulling="2026-02-18 19:40:08.071705796 +0000 UTC m=+371.653660641" observedRunningTime="2026-02-18 19:40:08.704802255 +0000 UTC m=+372.286757130" watchObservedRunningTime="2026-02-18 19:40:08.724500035 +0000 UTC m=+372.306454900" Feb 18 19:40:08 crc kubenswrapper[4932]: I0218 19:40:08.727450 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6mzhg" podStartSLOduration=3.224022042 podStartE2EDuration="5.727432827s" podCreationTimestamp="2026-02-18 19:40:03 +0000 UTC" firstStartedPulling="2026-02-18 19:40:05.644231111 +0000 UTC m=+369.226185946" lastFinishedPulling="2026-02-18 19:40:08.147641886 +0000 UTC m=+371.729596731" observedRunningTime="2026-02-18 19:40:08.722745652 +0000 UTC m=+372.304700507" watchObservedRunningTime="2026-02-18 19:40:08.727432827 +0000 UTC m=+372.309387692" Feb 18 19:40:11 crc kubenswrapper[4932]: I0218 19:40:11.784987 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:11 crc kubenswrapper[4932]: I0218 19:40:11.785375 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:11 crc kubenswrapper[4932]: I0218 19:40:11.835807 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:12 crc kubenswrapper[4932]: I0218 19:40:12.007460 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:12 crc kubenswrapper[4932]: I0218 19:40:12.008633 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:12 crc kubenswrapper[4932]: I0218 19:40:12.065078 4932 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:12 crc kubenswrapper[4932]: I0218 19:40:12.755960 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fbhgz" Feb 18 19:40:12 crc kubenswrapper[4932]: I0218 19:40:12.760309 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mshwj" Feb 18 19:40:14 crc kubenswrapper[4932]: I0218 19:40:14.171247 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:14 crc kubenswrapper[4932]: I0218 19:40:14.171529 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:14 crc kubenswrapper[4932]: I0218 19:40:14.366404 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:14 crc kubenswrapper[4932]: I0218 19:40:14.366467 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:14 crc kubenswrapper[4932]: I0218 19:40:14.434046 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:14 crc kubenswrapper[4932]: I0218 19:40:14.765345 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2xmq4" Feb 18 19:40:15 crc kubenswrapper[4932]: I0218 19:40:15.257094 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6mzhg" podUID="f3054517-1735-4758-9f31-1bea7ef3a90f" containerName="registry-server" probeResult="failure" output=< Feb 18 19:40:15 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 19:40:15 crc 
kubenswrapper[4932]: > Feb 18 19:40:24 crc kubenswrapper[4932]: I0218 19:40:24.209012 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:24 crc kubenswrapper[4932]: I0218 19:40:24.279551 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6mzhg" Feb 18 19:40:27 crc kubenswrapper[4932]: I0218 19:40:27.606236 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:40:27 crc kubenswrapper[4932]: I0218 19:40:27.606532 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.607089 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.608330 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.608414 4932 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.610321 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f6c0fd0c3107fc39e9f403b60bf7cadd547322feaa279357c61854210904894"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.610482 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://1f6c0fd0c3107fc39e9f403b60bf7cadd547322feaa279357c61854210904894" gracePeriod=600 Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.993105 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="1f6c0fd0c3107fc39e9f403b60bf7cadd547322feaa279357c61854210904894" exitCode=0 Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.993228 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"1f6c0fd0c3107fc39e9f403b60bf7cadd547322feaa279357c61854210904894"} Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.994061 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"4ae81c5d1f59105a46f72ab1a12573d6a2070dbba30970140847e1ed2a0ce08d"} Feb 18 19:40:57 crc kubenswrapper[4932]: I0218 19:40:57.994133 4932 scope.go:117] "RemoveContainer" 
containerID="913fe54a562d7c76dc650eb60f897351a08738fed51eeaa1bff2d8a0d762c64e" Feb 18 19:42:57 crc kubenswrapper[4932]: I0218 19:42:57.606231 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:42:57 crc kubenswrapper[4932]: I0218 19:42:57.606930 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:43:27 crc kubenswrapper[4932]: I0218 19:43:27.606808 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:43:27 crc kubenswrapper[4932]: I0218 19:43:27.607511 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:43:57 crc kubenswrapper[4932]: I0218 19:43:57.605739 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:43:57 crc kubenswrapper[4932]: I0218 19:43:57.606402 4932 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:43:57 crc kubenswrapper[4932]: I0218 19:43:57.606458 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:43:57 crc kubenswrapper[4932]: I0218 19:43:57.607227 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ae81c5d1f59105a46f72ab1a12573d6a2070dbba30970140847e1ed2a0ce08d"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:43:57 crc kubenswrapper[4932]: I0218 19:43:57.607298 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://4ae81c5d1f59105a46f72ab1a12573d6a2070dbba30970140847e1ed2a0ce08d" gracePeriod=600 Feb 18 19:43:58 crc kubenswrapper[4932]: I0218 19:43:58.175055 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="4ae81c5d1f59105a46f72ab1a12573d6a2070dbba30970140847e1ed2a0ce08d" exitCode=0 Feb 18 19:43:58 crc kubenswrapper[4932]: I0218 19:43:58.175201 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"4ae81c5d1f59105a46f72ab1a12573d6a2070dbba30970140847e1ed2a0ce08d"} Feb 18 19:43:58 crc kubenswrapper[4932]: I0218 
19:43:58.175460 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"f3b543e6ec63bdf78c858f95870e024438d65d986dd0f72b674fc74756af06be"} Feb 18 19:43:58 crc kubenswrapper[4932]: I0218 19:43:58.175497 4932 scope.go:117] "RemoveContainer" containerID="1f6c0fd0c3107fc39e9f403b60bf7cadd547322feaa279357c61854210904894" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.339442 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2rnll"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.340558 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.345581 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.350595 4932 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6jw5c" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.350648 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.355405 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2rnll"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.367410 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-9cct4"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.368166 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-9cct4" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.368642 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-9cct4"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.373920 4932 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-65g9t" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.377699 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-cfzm7"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.378553 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.379934 4932 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jrk65" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.399701 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-cfzm7"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.530393 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b8fr\" (UniqueName: \"kubernetes.io/projected/fdfef839-bac4-4bdb-bdec-7e5daff1d25a-kube-api-access-8b8fr\") pod \"cert-manager-webhook-687f57d79b-cfzm7\" (UID: \"fdfef839-bac4-4bdb-bdec-7e5daff1d25a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.530487 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwsw6\" (UniqueName: \"kubernetes.io/projected/e536e457-1629-4f37-a5dc-de0facb7639f-kube-api-access-fwsw6\") pod \"cert-manager-cainjector-cf98fcc89-2rnll\" (UID: \"e536e457-1629-4f37-a5dc-de0facb7639f\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.530529 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftl8l\" (UniqueName: \"kubernetes.io/projected/f4644100-28b1-4203-bec6-a1c1605468eb-kube-api-access-ftl8l\") pod \"cert-manager-858654f9db-9cct4\" (UID: \"f4644100-28b1-4203-bec6-a1c1605468eb\") " pod="cert-manager/cert-manager-858654f9db-9cct4" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.631152 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftl8l\" (UniqueName: \"kubernetes.io/projected/f4644100-28b1-4203-bec6-a1c1605468eb-kube-api-access-ftl8l\") pod \"cert-manager-858654f9db-9cct4\" (UID: \"f4644100-28b1-4203-bec6-a1c1605468eb\") " pod="cert-manager/cert-manager-858654f9db-9cct4" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.631253 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b8fr\" (UniqueName: \"kubernetes.io/projected/fdfef839-bac4-4bdb-bdec-7e5daff1d25a-kube-api-access-8b8fr\") pod \"cert-manager-webhook-687f57d79b-cfzm7\" (UID: \"fdfef839-bac4-4bdb-bdec-7e5daff1d25a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.631324 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwsw6\" (UniqueName: \"kubernetes.io/projected/e536e457-1629-4f37-a5dc-de0facb7639f-kube-api-access-fwsw6\") pod \"cert-manager-cainjector-cf98fcc89-2rnll\" (UID: \"e536e457-1629-4f37-a5dc-de0facb7639f\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.650493 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b8fr\" (UniqueName: 
\"kubernetes.io/projected/fdfef839-bac4-4bdb-bdec-7e5daff1d25a-kube-api-access-8b8fr\") pod \"cert-manager-webhook-687f57d79b-cfzm7\" (UID: \"fdfef839-bac4-4bdb-bdec-7e5daff1d25a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.651924 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftl8l\" (UniqueName: \"kubernetes.io/projected/f4644100-28b1-4203-bec6-a1c1605468eb-kube-api-access-ftl8l\") pod \"cert-manager-858654f9db-9cct4\" (UID: \"f4644100-28b1-4203-bec6-a1c1605468eb\") " pod="cert-manager/cert-manager-858654f9db-9cct4" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.654831 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwsw6\" (UniqueName: \"kubernetes.io/projected/e536e457-1629-4f37-a5dc-de0facb7639f-kube-api-access-fwsw6\") pod \"cert-manager-cainjector-cf98fcc89-2rnll\" (UID: \"e536e457-1629-4f37-a5dc-de0facb7639f\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.662680 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.688199 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-9cct4" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.697851 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.930280 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-9cct4"] Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.936328 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:44:56 crc kubenswrapper[4932]: I0218 19:44:56.976026 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-cfzm7"] Feb 18 19:44:56 crc kubenswrapper[4932]: W0218 19:44:56.980360 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdfef839_bac4_4bdb_bdec_7e5daff1d25a.slice/crio-584ada05cd83c8ea8236a8ae5627b0df5f1bbf9f4fc59cd204f1a9c2e2f00921 WatchSource:0}: Error finding container 584ada05cd83c8ea8236a8ae5627b0df5f1bbf9f4fc59cd204f1a9c2e2f00921: Status 404 returned error can't find the container with id 584ada05cd83c8ea8236a8ae5627b0df5f1bbf9f4fc59cd204f1a9c2e2f00921 Feb 18 19:44:57 crc kubenswrapper[4932]: I0218 19:44:57.095454 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2rnll"] Feb 18 19:44:57 crc kubenswrapper[4932]: W0218 19:44:57.102160 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode536e457_1629_4f37_a5dc_de0facb7639f.slice/crio-6f7b4b218d34cda877b5b91da187d549579e73229230712e274e2d5c8a97d765 WatchSource:0}: Error finding container 6f7b4b218d34cda877b5b91da187d549579e73229230712e274e2d5c8a97d765: Status 404 returned error can't find the container with id 6f7b4b218d34cda877b5b91da187d549579e73229230712e274e2d5c8a97d765 Feb 18 19:44:57 crc kubenswrapper[4932]: I0218 19:44:57.537417 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" event={"ID":"fdfef839-bac4-4bdb-bdec-7e5daff1d25a","Type":"ContainerStarted","Data":"584ada05cd83c8ea8236a8ae5627b0df5f1bbf9f4fc59cd204f1a9c2e2f00921"} Feb 18 19:44:57 crc kubenswrapper[4932]: I0218 19:44:57.538438 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-9cct4" event={"ID":"f4644100-28b1-4203-bec6-a1c1605468eb","Type":"ContainerStarted","Data":"5ececa4df3d4663f238180c59c0fe70826463a7fdecfc4b797d81c2fcc339ca5"} Feb 18 19:44:57 crc kubenswrapper[4932]: I0218 19:44:57.539257 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" event={"ID":"e536e457-1629-4f37-a5dc-de0facb7639f","Type":"ContainerStarted","Data":"6f7b4b218d34cda877b5b91da187d549579e73229230712e274e2d5c8a97d765"} Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.163959 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz"] Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.165034 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.167521 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.167721 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.176626 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz"] Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.182264 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84719922-9618-4293-8f4a-fb525f37eca6-config-volume\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.182410 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84719922-9618-4293-8f4a-fb525f37eca6-secret-volume\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.182469 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfnzf\" (UniqueName: \"kubernetes.io/projected/84719922-9618-4293-8f4a-fb525f37eca6-kube-api-access-cfnzf\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.284599 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84719922-9618-4293-8f4a-fb525f37eca6-config-volume\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.284665 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84719922-9618-4293-8f4a-fb525f37eca6-secret-volume\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.284688 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfnzf\" (UniqueName: \"kubernetes.io/projected/84719922-9618-4293-8f4a-fb525f37eca6-kube-api-access-cfnzf\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.299715 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84719922-9618-4293-8f4a-fb525f37eca6-config-volume\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.306483 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/84719922-9618-4293-8f4a-fb525f37eca6-secret-volume\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.320581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfnzf\" (UniqueName: \"kubernetes.io/projected/84719922-9618-4293-8f4a-fb525f37eca6-kube-api-access-cfnzf\") pod \"collect-profiles-29524065-tcbfz\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:00 crc kubenswrapper[4932]: I0218 19:45:00.492538 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.193912 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz"] Feb 18 19:45:01 crc kubenswrapper[4932]: W0218 19:45:01.200949 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84719922_9618_4293_8f4a_fb525f37eca6.slice/crio-a253c63869ae1908974f0d3775039f12898abfd45ec3648d665b720963bf391a WatchSource:0}: Error finding container a253c63869ae1908974f0d3775039f12898abfd45ec3648d665b720963bf391a: Status 404 returned error can't find the container with id a253c63869ae1908974f0d3775039f12898abfd45ec3648d665b720963bf391a Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.567269 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" event={"ID":"e536e457-1629-4f37-a5dc-de0facb7639f","Type":"ContainerStarted","Data":"a542a8caa59a6958ad3c9f7e345775e7e2bfdaf60a51c058177c52b819823e86"} Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 
19:45:01.568389 4932 generic.go:334] "Generic (PLEG): container finished" podID="84719922-9618-4293-8f4a-fb525f37eca6" containerID="80752bb80b5cb6dad23a49c747590ff84b2c23ef678e45c05c4cf091b2c9b0a9" exitCode=0 Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.568477 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" event={"ID":"84719922-9618-4293-8f4a-fb525f37eca6","Type":"ContainerDied","Data":"80752bb80b5cb6dad23a49c747590ff84b2c23ef678e45c05c4cf091b2c9b0a9"} Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.568554 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" event={"ID":"84719922-9618-4293-8f4a-fb525f37eca6","Type":"ContainerStarted","Data":"a253c63869ae1908974f0d3775039f12898abfd45ec3648d665b720963bf391a"} Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.569392 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" event={"ID":"fdfef839-bac4-4bdb-bdec-7e5daff1d25a","Type":"ContainerStarted","Data":"0ce85b450cd0c5f0d92d9f4043eb828366f60b3434efea16a52d65b5b5a104df"} Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.569501 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.570421 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-9cct4" event={"ID":"f4644100-28b1-4203-bec6-a1c1605468eb","Type":"ContainerStarted","Data":"011a20a1a8376f6ae34b0595328b637ea52712ecf246045b35b3370efdd352bb"} Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.592371 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2rnll" podStartSLOduration=1.894693077 podStartE2EDuration="5.592344006s" 
podCreationTimestamp="2026-02-18 19:44:56 +0000 UTC" firstStartedPulling="2026-02-18 19:44:57.104093848 +0000 UTC m=+660.686048703" lastFinishedPulling="2026-02-18 19:45:00.801744787 +0000 UTC m=+664.383699632" observedRunningTime="2026-02-18 19:45:01.580250837 +0000 UTC m=+665.162205692" watchObservedRunningTime="2026-02-18 19:45:01.592344006 +0000 UTC m=+665.174298891" Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.603337 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-9cct4" podStartSLOduration=1.69179353 podStartE2EDuration="5.603306016s" podCreationTimestamp="2026-02-18 19:44:56 +0000 UTC" firstStartedPulling="2026-02-18 19:44:56.936063472 +0000 UTC m=+660.518018317" lastFinishedPulling="2026-02-18 19:45:00.847575958 +0000 UTC m=+664.429530803" observedRunningTime="2026-02-18 19:45:01.597280797 +0000 UTC m=+665.179235712" watchObservedRunningTime="2026-02-18 19:45:01.603306016 +0000 UTC m=+665.185260911" Feb 18 19:45:01 crc kubenswrapper[4932]: I0218 19:45:01.623863 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" podStartSLOduration=1.799602521 podStartE2EDuration="5.623833963s" podCreationTimestamp="2026-02-18 19:44:56 +0000 UTC" firstStartedPulling="2026-02-18 19:44:56.982057617 +0000 UTC m=+660.564012452" lastFinishedPulling="2026-02-18 19:45:00.806289059 +0000 UTC m=+664.388243894" observedRunningTime="2026-02-18 19:45:01.618190113 +0000 UTC m=+665.200144958" watchObservedRunningTime="2026-02-18 19:45:01.623833963 +0000 UTC m=+665.205788808" Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.870745 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.916867 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfnzf\" (UniqueName: \"kubernetes.io/projected/84719922-9618-4293-8f4a-fb525f37eca6-kube-api-access-cfnzf\") pod \"84719922-9618-4293-8f4a-fb525f37eca6\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.916989 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84719922-9618-4293-8f4a-fb525f37eca6-secret-volume\") pod \"84719922-9618-4293-8f4a-fb525f37eca6\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.917048 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84719922-9618-4293-8f4a-fb525f37eca6-config-volume\") pod \"84719922-9618-4293-8f4a-fb525f37eca6\" (UID: \"84719922-9618-4293-8f4a-fb525f37eca6\") " Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.918060 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84719922-9618-4293-8f4a-fb525f37eca6-config-volume" (OuterVolumeSpecName: "config-volume") pod "84719922-9618-4293-8f4a-fb525f37eca6" (UID: "84719922-9618-4293-8f4a-fb525f37eca6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.922908 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84719922-9618-4293-8f4a-fb525f37eca6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "84719922-9618-4293-8f4a-fb525f37eca6" (UID: "84719922-9618-4293-8f4a-fb525f37eca6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:45:02 crc kubenswrapper[4932]: I0218 19:45:02.922968 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84719922-9618-4293-8f4a-fb525f37eca6-kube-api-access-cfnzf" (OuterVolumeSpecName: "kube-api-access-cfnzf") pod "84719922-9618-4293-8f4a-fb525f37eca6" (UID: "84719922-9618-4293-8f4a-fb525f37eca6"). InnerVolumeSpecName "kube-api-access-cfnzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[4932]: I0218 19:45:03.019556 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84719922-9618-4293-8f4a-fb525f37eca6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4932]: I0218 19:45:03.019599 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84719922-9618-4293-8f4a-fb525f37eca6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4932]: I0218 19:45:03.019612 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfnzf\" (UniqueName: \"kubernetes.io/projected/84719922-9618-4293-8f4a-fb525f37eca6-kube-api-access-cfnzf\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4932]: I0218 19:45:03.583515 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" event={"ID":"84719922-9618-4293-8f4a-fb525f37eca6","Type":"ContainerDied","Data":"a253c63869ae1908974f0d3775039f12898abfd45ec3648d665b720963bf391a"} Feb 18 19:45:03 crc kubenswrapper[4932]: I0218 19:45:03.583789 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a253c63869ae1908974f0d3775039f12898abfd45ec3648d665b720963bf391a" Feb 18 19:45:03 crc kubenswrapper[4932]: I0218 19:45:03.583591 4932 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.492562 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hbqb5"] Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.495991 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-controller" containerID="cri-o://58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.496091 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="nbdb" containerID="cri-o://6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.496108 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.496200 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="northd" containerID="cri-o://4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.496247 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="sbdb" 
containerID="cri-o://fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.496282 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-acl-logging" containerID="cri-o://2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.496281 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-node" containerID="cri-o://cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.530638 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" containerID="cri-o://bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4" gracePeriod=30 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.603197 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/2.log" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.603697 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/1.log" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.603743 4932 generic.go:334] "Generic (PLEG): container finished" podID="1b8d80e2-307e-43b6-9003-e77eef51e084" containerID="abaae01c3d1488753c134b713c5ac61b4207745b6a2dc1624d7639c5e6d2387b" exitCode=2 Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.603786 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerDied","Data":"abaae01c3d1488753c134b713c5ac61b4207745b6a2dc1624d7639c5e6d2387b"} Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.603837 4932 scope.go:117] "RemoveContainer" containerID="3e8702ea2a3ccfe6e870f680c6626413f332d89935501738f35ce5a35d33ddda" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.604588 4932 scope.go:117] "RemoveContainer" containerID="abaae01c3d1488753c134b713c5ac61b4207745b6a2dc1624d7639c5e6d2387b" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.604829 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-sj8bg_openshift-multus(1b8d80e2-307e-43b6-9003-e77eef51e084)\"" pod="openshift-multus/multus-sj8bg" podUID="1b8d80e2-307e-43b6-9003-e77eef51e084" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.700708 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-cfzm7" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.840690 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/3.log" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.842716 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovn-acl-logging/0.log" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.843199 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovn-controller/0.log" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.843542 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875241 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-systemd\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875306 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-var-lib-openvswitch\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875377 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-config\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875399 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-kubelet\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875561 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-ovn-kubernetes\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875684 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-openvswitch\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875705 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-script-lib\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875723 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-var-lib-cni-networks-ovn-kubernetes\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875749 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-slash\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.875980 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876027 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876051 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876299 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21e3c087-c564-4f66-a656-c92a4e47fa72-ovn-node-metrics-cert\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876287 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876340 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876318 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-systemd-units\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876367 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-ovn\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876382 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876401 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-node-log\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876754 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-log-socket\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876779 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-bin\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876932 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-netd\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876457 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876480 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-slash" (OuterVolumeSpecName: "host-slash") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876501 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.876520 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-node-log" (OuterVolumeSpecName: "node-log") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877027 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-log-socket" (OuterVolumeSpecName: "log-socket") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877081 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnfjd\" (UniqueName: \"kubernetes.io/projected/21e3c087-c564-4f66-a656-c92a4e47fa72-kube-api-access-xnfjd\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877085 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877110 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-netns\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877289 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-env-overrides\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877425 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-etc-openvswitch\") pod \"21e3c087-c564-4f66-a656-c92a4e47fa72\" (UID: \"21e3c087-c564-4f66-a656-c92a4e47fa72\") " Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877127 4932 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877369 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.877533 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878091 4932 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878106 4932 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878114 4932 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878123 4932 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878131 4932 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878140 4932 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878149 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878157 4932 
reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878166 4932 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878195 4932 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878203 4932 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878212 4932 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878221 4932 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878230 4932 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878238 4932 reconciler_common.go:293] "Volume detached for volume 
\"run-ovn\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878262 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.878371 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.881463 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e3c087-c564-4f66-a656-c92a4e47fa72-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.883431 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e3c087-c564-4f66-a656-c92a4e47fa72-kube-api-access-xnfjd" (OuterVolumeSpecName: "kube-api-access-xnfjd") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "kube-api-access-xnfjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.895250 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "21e3c087-c564-4f66-a656-c92a4e47fa72" (UID: "21e3c087-c564-4f66-a656-c92a4e47fa72"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.909634 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-brc6b"] Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.909968 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-node" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.909987 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-node" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910002 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kubecfg-setup" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910014 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kubecfg-setup" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910025 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910034 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910045 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="sbdb" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910053 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="sbdb" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910061 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910069 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910080 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910088 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910099 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-acl-logging" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910110 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-acl-logging" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910157 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="northd" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910167 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="northd" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910192 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910200 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910210 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910218 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910234 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84719922-9618-4293-8f4a-fb525f37eca6" containerName="collect-profiles" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910245 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="84719922-9618-4293-8f4a-fb525f37eca6" containerName="collect-profiles" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910257 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910266 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910280 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="nbdb" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910288 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="nbdb" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910454 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="northd" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910470 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910481 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="84719922-9618-4293-8f4a-fb525f37eca6" containerName="collect-profiles" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910489 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910500 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-node" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910510 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910520 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910532 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="nbdb" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910543 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910554 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="sbdb" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910565 4932 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-acl-logging" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910576 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovn-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: E0218 19:45:06.910702 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910711 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.910847 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerName="ovnkube-controller" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.913828 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.979690 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-var-lib-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.979771 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d36c53b-01fa-4726-b231-08718883716e-ovn-node-metrics-cert\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.979808 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-cni-netd\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.979849 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-node-log\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.979891 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-run-netns\") pod \"ovnkube-node-brc6b\" 
(UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.979977 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-cni-bin\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980010 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980045 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-log-socket\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980072 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-ovnkube-config\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980107 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-etc-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980148 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-env-overrides\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980207 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-run-ovn-kubernetes\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980239 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-ovnkube-script-lib\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980282 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-ovn\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980314 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-kubelet\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980344 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-slash\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980375 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v6q4\" (UniqueName: \"kubernetes.io/projected/1d36c53b-01fa-4726-b231-08718883716e-kube-api-access-7v6q4\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980493 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-systemd-units\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980634 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980656 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-systemd\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980709 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnfjd\" (UniqueName: \"kubernetes.io/projected/21e3c087-c564-4f66-a656-c92a4e47fa72-kube-api-access-xnfjd\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980724 4932 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/21e3c087-c564-4f66-a656-c92a4e47fa72-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980733 4932 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980742 4932 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/21e3c087-c564-4f66-a656-c92a4e47fa72-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:06 crc kubenswrapper[4932]: I0218 19:45:06.980751 4932 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/21e3c087-c564-4f66-a656-c92a4e47fa72-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082517 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-systemd\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 
19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082572 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-var-lib-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082596 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d36c53b-01fa-4726-b231-08718883716e-ovn-node-metrics-cert\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082620 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-cni-netd\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082641 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-node-log\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082682 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-run-netns\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082703 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-var-lib-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082755 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-cni-bin\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082726 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-cni-bin\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082778 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-node-log\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082798 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-systemd\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082837 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-run-netns\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082884 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-log-socket\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082750 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-cni-netd\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.082847 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-log-socket\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.083082 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.083130 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.083239 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-ovnkube-config\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084316 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-etc-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084417 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-env-overrides\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084450 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-etc-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084470 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-run-ovn-kubernetes\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084507 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-ovnkube-script-lib\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084594 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-ovnkube-config\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084612 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-run-ovn-kubernetes\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084614 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-ovn\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084665 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-ovn\") pod 
\"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084723 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-kubelet\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084807 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-kubelet\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084810 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-slash\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084845 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v6q4\" (UniqueName: \"kubernetes.io/projected/1d36c53b-01fa-4726-b231-08718883716e-kube-api-access-7v6q4\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084848 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-host-slash\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084864 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-systemd-units\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084883 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084931 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-run-openvswitch\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.084954 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d36c53b-01fa-4726-b231-08718883716e-systemd-units\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.085396 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-env-overrides\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 
19:45:07.086218 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d36c53b-01fa-4726-b231-08718883716e-ovnkube-script-lib\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.088159 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d36c53b-01fa-4726-b231-08718883716e-ovn-node-metrics-cert\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.107587 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v6q4\" (UniqueName: \"kubernetes.io/projected/1d36c53b-01fa-4726-b231-08718883716e-kube-api-access-7v6q4\") pod \"ovnkube-node-brc6b\" (UID: \"1d36c53b-01fa-4726-b231-08718883716e\") " pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.239281 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.611703 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovnkube-controller/3.log" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.614786 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovn-acl-logging/0.log" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615307 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hbqb5_21e3c087-c564-4f66-a656-c92a4e47fa72/ovn-controller/0.log" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615657 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615684 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615696 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615708 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615722 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" 
containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615731 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06" exitCode=0 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615734 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615739 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5" exitCode=143 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615749 4932 generic.go:334] "Generic (PLEG): container finished" podID="21e3c087-c564-4f66-a656-c92a4e47fa72" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545" exitCode=143 Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615693 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615817 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615833 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" 
event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615846 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615858 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615870 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615882 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615897 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615904 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615911 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615918 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615925 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615932 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615939 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615947 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615956 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615968 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615975 4932 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615982 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615988 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615995 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.615910 4932 scope.go:117] "RemoveContainer" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4" Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.616001 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617003 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617022 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617033 4932 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617056 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617076 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617100 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617112 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617121 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617131 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617141 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617150 4932 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617159 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617202 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617214 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617223 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617238 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hbqb5" event={"ID":"21e3c087-c564-4f66-a656-c92a4e47fa72","Type":"ContainerDied","Data":"1becd3aaad487cc81da8ef3a1202626206425a186289801483cd534c986b4c0d"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617444 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617462 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} Feb 18 
19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617475 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617486 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617497 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617506 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617520 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617532 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617542 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.617552 4932 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} Feb 18 
19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.630141 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/2.log"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.637465 4932 generic.go:334] "Generic (PLEG): container finished" podID="1d36c53b-01fa-4726-b231-08718883716e" containerID="b5ba5acb239bcc85d6fb900e2f5d011076f80bb44198b0898b04a4b5cf088411" exitCode=0
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.637514 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerDied","Data":"b5ba5acb239bcc85d6fb900e2f5d011076f80bb44198b0898b04a4b5cf088411"}
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.637572 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"bba935a9383fdf705b50a62b00c562a7cd962d6da05559ee959a51002db4363b"}
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.666038 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.691334 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hbqb5"]
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.693487 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hbqb5"]
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.713245 4932 scope.go:117] "RemoveContainer" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.745320 4932 scope.go:117] "RemoveContainer" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.762827 4932 scope.go:117] "RemoveContainer" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.786731 4932 scope.go:117] "RemoveContainer" containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.803791 4932 scope.go:117] "RemoveContainer" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.819845 4932 scope.go:117] "RemoveContainer" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.837619 4932 scope.go:117] "RemoveContainer" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.863288 4932 scope.go:117] "RemoveContainer" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.938213 4932 scope.go:117] "RemoveContainer" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"
Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.938837 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": container with ID starting with bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4 not found: ID does not exist" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.938889 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} err="failed to get container status \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": rpc error: code = NotFound desc = could not find container \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": container with ID starting with bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.938920 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"
Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.939262 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": container with ID starting with d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf not found: ID does not exist" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.939320 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} err="failed to get container status \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": rpc error: code = NotFound desc = could not find container \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": container with ID starting with d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.939356 4932 scope.go:117] "RemoveContainer" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"
Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.939680 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": container with ID starting with fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f not found: ID does not exist" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.939710 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} err="failed to get container status \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": rpc error: code = NotFound desc = could not find container \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": container with ID starting with fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.939728 4932 scope.go:117] "RemoveContainer" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"
Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.940009 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": container with ID starting with 6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf not found: ID does not exist" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.940042 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} err="failed to get container status \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": rpc error: code = NotFound desc = could not find container \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": container with ID starting with 6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.940062 4932 scope.go:117] "RemoveContainer" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"
Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.940637 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": container with ID starting with 4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2 not found: ID does not exist" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.940684 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} err="failed to get container status \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": rpc error: code = NotFound desc = could not find container \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": container with ID starting with 4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.940713 4932 scope.go:117] "RemoveContainer" containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"
Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.941034 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": container with ID starting with 6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0 not found: ID does not exist" containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.941062 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} err="failed to get container status \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": rpc error: code = NotFound desc = could not find container \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": container with ID starting with 6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.941079 4932 scope.go:117] "RemoveContainer" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"
Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.941437 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": container with ID starting with cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06 not found: ID does not exist" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.941474 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} err="failed to get container status \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": rpc error: code = NotFound desc = could not find container \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": container with ID starting with cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.941495 4932 scope.go:117] "RemoveContainer" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"
Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.941764 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": container with ID starting with 2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5 not found: ID does not exist" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.941787 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} err="failed to get container status \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": rpc error: code = NotFound desc = could not find container \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": container with ID starting with 2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.941804 4932 scope.go:117] "RemoveContainer" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"
Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.942615 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": container with ID starting with 58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545 not found: ID does not exist" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.942644 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} err="failed to get container status \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": rpc error: code = NotFound desc = could not find container \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": container with ID starting with 58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.942659 4932 scope.go:117] "RemoveContainer" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"
Feb 18 19:45:07 crc kubenswrapper[4932]: E0218 19:45:07.943454 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": container with ID starting with 4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159 not found: ID does not exist" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.943495 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} err="failed to get container status \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": rpc error: code = NotFound desc = could not find container \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": container with ID starting with 4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.943522 4932 scope.go:117] "RemoveContainer" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.943815 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} err="failed to get container status \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": rpc error: code = NotFound desc = could not find container \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": container with ID starting with bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.943837 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944042 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} err="failed to get container status \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": rpc error: code = NotFound desc = could not find container \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": container with ID starting with d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944064 4932 scope.go:117] "RemoveContainer" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944277 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} err="failed to get container status \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": rpc error: code = NotFound desc = could not find container \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": container with ID starting with fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944301 4932 scope.go:117] "RemoveContainer" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944572 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} err="failed to get container status \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": rpc error: code = NotFound desc = could not find container \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": container with ID starting with 6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944618 4932 scope.go:117] "RemoveContainer" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944885 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} err="failed to get container status \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": rpc error: code = NotFound desc = could not find container \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": container with ID starting with 4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.944922 4932 scope.go:117] "RemoveContainer" containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.945254 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} err="failed to get container status \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": rpc error: code = NotFound desc = could not find container \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": container with ID starting with 6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.945287 4932 scope.go:117] "RemoveContainer" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.945914 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} err="failed to get container status \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": rpc error: code = NotFound desc = could not find container \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": container with ID starting with cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.945951 4932 scope.go:117] "RemoveContainer" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.946334 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} err="failed to get container status \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": rpc error: code = NotFound desc = could not find container \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": container with ID starting with 2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.946363 4932 scope.go:117] "RemoveContainer" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.946674 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} err="failed to get container status \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": rpc error: code = NotFound desc = could not find container \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": container with ID starting with 58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.946701 4932 scope.go:117] "RemoveContainer" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.946965 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} err="failed to get container status \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": rpc error: code = NotFound desc = could not find container \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": container with ID starting with 4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.947012 4932 scope.go:117] "RemoveContainer" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.947358 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} err="failed to get container status \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": rpc error: code = NotFound desc = could not find container \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": container with ID starting with bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.947383 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.947707 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} err="failed to get container status \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": rpc error: code = NotFound desc = could not find container \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": container with ID starting with d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.947728 4932 scope.go:117] "RemoveContainer" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.947980 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} err="failed to get container status \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": rpc error: code = NotFound desc = could not find container \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": container with ID starting with fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.948001 4932 scope.go:117] "RemoveContainer" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.948325 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} err="failed to get container status \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": rpc error: code = NotFound desc = could not find container \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": container with ID starting with 6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.948361 4932 scope.go:117] "RemoveContainer" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.948673 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} err="failed to get container status \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": rpc error: code = NotFound desc = could not find container \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": container with ID starting with 4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.948692 4932 scope.go:117] "RemoveContainer" containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.949280 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} err="failed to get container status \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": rpc error: code = NotFound desc = could not find container \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": container with ID starting with 6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.949305 4932 scope.go:117] "RemoveContainer" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.949557 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} err="failed to get container status \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": rpc error: code = NotFound desc = could not find container \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": container with ID starting with cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.949581 4932 scope.go:117] "RemoveContainer" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.949964 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} err="failed to get container status \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": rpc error: code = NotFound desc = could not find container \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": container with ID starting with 2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950018 4932 scope.go:117] "RemoveContainer" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950267 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} err="failed to get container status \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": rpc error: code = NotFound desc = could not find container \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": container with ID starting with 58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950303 4932 scope.go:117] "RemoveContainer" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950695 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} err="failed to get container status \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": rpc error: code = NotFound desc = could not find container \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": container with ID starting with 4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950729 4932 scope.go:117] "RemoveContainer" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950951 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} err="failed to get container status \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": rpc error: code = NotFound desc = could not find container \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": container with ID starting with bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.950981 4932 scope.go:117] "RemoveContainer" containerID="d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.951237 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf"} err="failed to get container status \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": rpc error: code = NotFound desc = could not find container \"d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf\": container with ID starting with d06f101ed1a809c7d85dc9b800e5808bd5a08c910ea2e8ec03f51c936a90f1bf not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.951259 4932 scope.go:117] "RemoveContainer" containerID="fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.951523 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f"} err="failed to get container status \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": rpc error: code = NotFound desc = could not find container \"fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f\": container with ID starting with fdbc215bdf60f6b03e60e3edc13b1162b830c4e50903ceb381eabbbf194cda3f not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.951540 4932 scope.go:117] "RemoveContainer" containerID="6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.951782 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf"} err="failed to get container status \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": rpc error: code = NotFound desc = could not find container \"6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf\": container with ID starting with 6ef97de2bdeb88915558c55e8a7c5d7af12762786c13e8682a48347b92d52caf not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.951823 4932 scope.go:117] "RemoveContainer" containerID="4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952063 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2"} err="failed to get container status \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": rpc error: code = NotFound desc = could not find container \"4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2\": container with ID starting with 4aeb45db6b5c1a03cf148a3ecbaca9fe119fa15dc0817a741c9377cfb56efed2 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952100 4932 scope.go:117] "RemoveContainer" containerID="6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952371 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0"} err="failed to get container status \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": rpc error: code = NotFound desc = could not find container \"6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0\": container with ID starting with 6035d66e2306cc496cb12bf5017b546211a7e2be8b8ad6d3a83b13dc2cb020b0 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952392 4932 scope.go:117] "RemoveContainer" containerID="cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952598 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06"} err="failed to get container status \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": rpc error: code = NotFound desc = could not find container \"cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06\": container with ID starting with cf94949ccb6354f403e73ca6d5ecedc6b820e86e2c48bb795b12ee1a16105b06 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952636 4932 scope.go:117] "RemoveContainer" containerID="2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952932 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5"} err="failed to get container status \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": rpc error: code = NotFound desc = could not find container \"2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5\": container with ID starting with 2d6399ddc4700f4418f236f4a199274431298ad0ab6c811f1f65c174943d64e5 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.952956 4932 scope.go:117] "RemoveContainer" containerID="58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.953241 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545"} err="failed to get container status \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": rpc error: code = NotFound desc = could not find container \"58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545\": container with ID starting with 58b8c323190c67ba8b78e1687c113dcb996fc135f8107cdd5f800112b987c545 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.953278 4932 scope.go:117] "RemoveContainer" containerID="4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.953563 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159"} err="failed to get container status \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": rpc error: code = NotFound desc = could not find container \"4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159\": container with ID starting with 4b24829de50b7041c800855ed3a35cdff782cbfc69ee593d5d40e0cbc83a0159 not found: ID does not exist"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.953585 4932 scope.go:117] "RemoveContainer" containerID="bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"
Feb 18 19:45:07 crc kubenswrapper[4932]: I0218 19:45:07.953844 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4"} err="failed to get container status \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": rpc error: code = NotFound desc = could not find container \"bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4\": container with ID starting with bb8420a7312cda39b8b1787f3bf5dd80a8386284c8e4654180225b7fd7c384f4 not found: ID does not exist"
Feb 18 19:45:08 crc kubenswrapper[4932]: I0218 19:45:08.647277 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"6d2d9f18453112b56de9ed1ac2ac35c8901b1fa2bfe02601b889488d7438840b"}
Feb 18 19:45:08 crc kubenswrapper[4932]: I0218 19:45:08.647636 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"0b4b480687787ca2a9a3cc73678a7b092ef5d087e688486de2b4af956b932c47"}
Feb 18 19:45:08 crc kubenswrapper[4932]: I0218 19:45:08.647652 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"6583c4490500dca1d49d0026d96ac215560cc1ee103dbd68360d021fd04deeda"}
Feb 18 19:45:08 crc kubenswrapper[4932]: I0218 19:45:08.647665 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"5d52d4499a1a0e3056b223da93998d7ffa8e218e668c74677b23d239b090e9f9"}
Feb 18 19:45:08 crc kubenswrapper[4932]: I0218 19:45:08.647678 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"a06daccd02c3777381f4b576765d08dd3199be82f57c11c85cb1cf79fe779102"}
Feb 18 19:45:08 crc kubenswrapper[4932]: I0218 19:45:08.647690 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"f592cd19b3958909e2d6b6e71f37dd57cb55b713aa5f3f3df488337b71de5db5"}
Feb 18 19:45:09 crc kubenswrapper[4932]: I0218 19:45:09.190581 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e3c087-c564-4f66-a656-c92a4e47fa72"
path="/var/lib/kubelet/pods/21e3c087-c564-4f66-a656-c92a4e47fa72/volumes" Feb 18 19:45:11 crc kubenswrapper[4932]: I0218 19:45:11.676430 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"c425b4efb38aa2efc9d9629627f2702df758ae1b94d94669a11f71bfe0306d1f"} Feb 18 19:45:13 crc kubenswrapper[4932]: I0218 19:45:13.690974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" event={"ID":"1d36c53b-01fa-4726-b231-08718883716e","Type":"ContainerStarted","Data":"28204a4c57b4333e8d24a0d941810e9585be3f9271c393cb8887b9724018aae0"} Feb 18 19:45:13 crc kubenswrapper[4932]: I0218 19:45:13.691587 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:13 crc kubenswrapper[4932]: I0218 19:45:13.691602 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:13 crc kubenswrapper[4932]: I0218 19:45:13.746774 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" podStartSLOduration=7.7467580940000005 podStartE2EDuration="7.746758094s" podCreationTimestamp="2026-02-18 19:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:45:13.742981791 +0000 UTC m=+677.324936646" watchObservedRunningTime="2026-02-18 19:45:13.746758094 +0000 UTC m=+677.328712939" Feb 18 19:45:13 crc kubenswrapper[4932]: I0218 19:45:13.761694 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:14 crc kubenswrapper[4932]: I0218 19:45:14.698008 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:14 crc kubenswrapper[4932]: I0218 19:45:14.741021 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:21 crc kubenswrapper[4932]: I0218 19:45:21.180261 4932 scope.go:117] "RemoveContainer" containerID="abaae01c3d1488753c134b713c5ac61b4207745b6a2dc1624d7639c5e6d2387b" Feb 18 19:45:21 crc kubenswrapper[4932]: E0218 19:45:21.181330 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-sj8bg_openshift-multus(1b8d80e2-307e-43b6-9003-e77eef51e084)\"" pod="openshift-multus/multus-sj8bg" podUID="1b8d80e2-307e-43b6-9003-e77eef51e084" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.158421 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"] Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.162043 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.164824 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.170385 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"] Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.183609 4932 scope.go:117] "RemoveContainer" containerID="abaae01c3d1488753c134b713c5ac61b4207745b6a2dc1624d7639c5e6d2387b" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.342674 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2pcw\" (UniqueName: \"kubernetes.io/projected/5f20105a-5425-4620-98f5-8a6ea6dce405-kube-api-access-c2pcw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.342953 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.342976 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: 
\"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.444372 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2pcw\" (UniqueName: \"kubernetes.io/projected/5f20105a-5425-4620-98f5-8a6ea6dce405-kube-api-access-c2pcw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.444451 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.444483 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.444879 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc 
kubenswrapper[4932]: I0218 19:45:34.444950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.485621 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2pcw\" (UniqueName: \"kubernetes.io/projected/5f20105a-5425-4620-98f5-8a6ea6dce405-kube-api-access-c2pcw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.492221 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.529810 4932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(2a78f79085852c40164ec8de2d46f6d902cdaff85b5dd399ad6a2550d56d3e7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.529885 4932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(2a78f79085852c40164ec8de2d46f6d902cdaff85b5dd399ad6a2550d56d3e7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.529912 4932 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(2a78f79085852c40164ec8de2d46f6d902cdaff85b5dd399ad6a2550d56d3e7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.529960 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace(5f20105a-5425-4620-98f5-8a6ea6dce405)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace(5f20105a-5425-4620-98f5-8a6ea6dce405)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(2a78f79085852c40164ec8de2d46f6d902cdaff85b5dd399ad6a2550d56d3e7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.837482 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sj8bg_1b8d80e2-307e-43b6-9003-e77eef51e084/kube-multus/2.log" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.837582 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.837585 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sj8bg" event={"ID":"1b8d80e2-307e-43b6-9003-e77eef51e084","Type":"ContainerStarted","Data":"6db89007d836797b0d9c8ef0e092b6c971e31acd8912299653558fc5ddef1d9f"} Feb 18 19:45:34 crc kubenswrapper[4932]: I0218 19:45:34.839001 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.881520 4932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(7529c016401fdeee0e514313fca1a3a67c35cd628dbc9814acd614ecc8616fe3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.881595 4932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(7529c016401fdeee0e514313fca1a3a67c35cd628dbc9814acd614ecc8616fe3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.881622 4932 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(7529c016401fdeee0e514313fca1a3a67c35cd628dbc9814acd614ecc8616fe3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:34 crc kubenswrapper[4932]: E0218 19:45:34.881710 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace(5f20105a-5425-4620-98f5-8a6ea6dce405)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace(5f20105a-5425-4620-98f5-8a6ea6dce405)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm_openshift-marketplace_5f20105a-5425-4620-98f5-8a6ea6dce405_0(7529c016401fdeee0e514313fca1a3a67c35cd628dbc9814acd614ecc8616fe3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" Feb 18 19:45:37 crc kubenswrapper[4932]: I0218 19:45:37.274449 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-brc6b" Feb 18 19:45:47 crc kubenswrapper[4932]: I0218 19:45:47.178427 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:47 crc kubenswrapper[4932]: I0218 19:45:47.183588 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:47 crc kubenswrapper[4932]: I0218 19:45:47.585326 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm"] Feb 18 19:45:47 crc kubenswrapper[4932]: W0218 19:45:47.595282 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f20105a_5425_4620_98f5_8a6ea6dce405.slice/crio-8b8444fb77144c09600fe22cfad18194e50dce16534f6a1667c73cf801c08376 WatchSource:0}: Error finding container 8b8444fb77144c09600fe22cfad18194e50dce16534f6a1667c73cf801c08376: Status 404 returned error can't find the container with id 8b8444fb77144c09600fe22cfad18194e50dce16534f6a1667c73cf801c08376 Feb 18 19:45:47 crc kubenswrapper[4932]: I0218 19:45:47.917393 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerStarted","Data":"82a8a4080e30730e579c6c585384167b1f2e31f8bf694dbe0b1b55bfb268013a"} Feb 18 19:45:47 crc kubenswrapper[4932]: I0218 19:45:47.917462 
4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerStarted","Data":"8b8444fb77144c09600fe22cfad18194e50dce16534f6a1667c73cf801c08376"} Feb 18 19:45:48 crc kubenswrapper[4932]: I0218 19:45:48.926748 4932 generic.go:334] "Generic (PLEG): container finished" podID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerID="82a8a4080e30730e579c6c585384167b1f2e31f8bf694dbe0b1b55bfb268013a" exitCode=0 Feb 18 19:45:48 crc kubenswrapper[4932]: I0218 19:45:48.926787 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerDied","Data":"82a8a4080e30730e579c6c585384167b1f2e31f8bf694dbe0b1b55bfb268013a"} Feb 18 19:45:50 crc kubenswrapper[4932]: I0218 19:45:50.949499 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerStarted","Data":"8722dd9b254c200ea891e7e29606c1ea6053477495d75e7f6c622013659a39e9"} Feb 18 19:45:51 crc kubenswrapper[4932]: I0218 19:45:51.967229 4932 generic.go:334] "Generic (PLEG): container finished" podID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerID="8722dd9b254c200ea891e7e29606c1ea6053477495d75e7f6c622013659a39e9" exitCode=0 Feb 18 19:45:51 crc kubenswrapper[4932]: I0218 19:45:51.967314 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerDied","Data":"8722dd9b254c200ea891e7e29606c1ea6053477495d75e7f6c622013659a39e9"} Feb 18 19:45:52 crc kubenswrapper[4932]: I0218 19:45:52.977446 4932 generic.go:334] "Generic (PLEG): 
container finished" podID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerID="e008405b46c42a7fdfb31d3b35fa53f40c7098db97e1e6146ebd3aa20f18820e" exitCode=0 Feb 18 19:45:52 crc kubenswrapper[4932]: I0218 19:45:52.977518 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerDied","Data":"e008405b46c42a7fdfb31d3b35fa53f40c7098db97e1e6146ebd3aa20f18820e"} Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.272645 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.438787 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2pcw\" (UniqueName: \"kubernetes.io/projected/5f20105a-5425-4620-98f5-8a6ea6dce405-kube-api-access-c2pcw\") pod \"5f20105a-5425-4620-98f5-8a6ea6dce405\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.438976 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-bundle\") pod \"5f20105a-5425-4620-98f5-8a6ea6dce405\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.439028 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-util\") pod \"5f20105a-5425-4620-98f5-8a6ea6dce405\" (UID: \"5f20105a-5425-4620-98f5-8a6ea6dce405\") " Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.442993 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-bundle" (OuterVolumeSpecName: "bundle") pod "5f20105a-5425-4620-98f5-8a6ea6dce405" (UID: "5f20105a-5425-4620-98f5-8a6ea6dce405"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.443837 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f20105a-5425-4620-98f5-8a6ea6dce405-kube-api-access-c2pcw" (OuterVolumeSpecName: "kube-api-access-c2pcw") pod "5f20105a-5425-4620-98f5-8a6ea6dce405" (UID: "5f20105a-5425-4620-98f5-8a6ea6dce405"). InnerVolumeSpecName "kube-api-access-c2pcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.452946 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-util" (OuterVolumeSpecName: "util") pod "5f20105a-5425-4620-98f5-8a6ea6dce405" (UID: "5f20105a-5425-4620-98f5-8a6ea6dce405"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.540686 4932 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.540729 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2pcw\" (UniqueName: \"kubernetes.io/projected/5f20105a-5425-4620-98f5-8a6ea6dce405-kube-api-access-c2pcw\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.540772 4932 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5f20105a-5425-4620-98f5-8a6ea6dce405-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.995546 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" event={"ID":"5f20105a-5425-4620-98f5-8a6ea6dce405","Type":"ContainerDied","Data":"8b8444fb77144c09600fe22cfad18194e50dce16534f6a1667c73cf801c08376"} Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.995610 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b8444fb77144c09600fe22cfad18194e50dce16534f6a1667c73cf801c08376" Feb 18 19:45:54 crc kubenswrapper[4932]: I0218 19:45:54.995711 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088j7cm" Feb 18 19:45:57 crc kubenswrapper[4932]: I0218 19:45:57.606221 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:45:57 crc kubenswrapper[4932]: I0218 19:45:57.606701 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.436350 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5"] Feb 18 19:46:06 crc kubenswrapper[4932]: E0218 19:46:06.437109 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="util" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.437125 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="util" Feb 18 19:46:06 crc kubenswrapper[4932]: E0218 19:46:06.437137 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="extract" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.437144 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="extract" Feb 18 19:46:06 crc kubenswrapper[4932]: E0218 19:46:06.437219 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="pull" 
Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.437229 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="pull" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.437345 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f20105a-5425-4620-98f5-8a6ea6dce405" containerName="extract" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.437730 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.439767 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.439996 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-zktvq" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.440113 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.440295 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5"] Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.539192 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95"] Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.539818 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.542934 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.543097 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-69htb" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.553072 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7"] Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.553759 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.567769 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95"] Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.581329 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7"] Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.593852 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2tft\" (UniqueName: \"kubernetes.io/projected/1d614362-98da-46f5-8874-c5afbd3fa2b8-kube-api-access-r2tft\") pod \"obo-prometheus-operator-68bc856cb9-nsqq5\" (UID: \"1d614362-98da-46f5-8874-c5afbd3fa2b8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.695860 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2tft\" 
(UniqueName: \"kubernetes.io/projected/1d614362-98da-46f5-8874-c5afbd3fa2b8-kube-api-access-r2tft\") pod \"obo-prometheus-operator-68bc856cb9-nsqq5\" (UID: \"1d614362-98da-46f5-8874-c5afbd3fa2b8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.696556 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95581019-de0d-4172-9b8a-765b66064517-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-4qc95\" (UID: \"95581019-de0d-4172-9b8a-765b66064517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.696679 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f73debe-8d66-454d-84aa-1559f284bfe0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-dswf7\" (UID: \"0f73debe-8d66-454d-84aa-1559f284bfe0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.698580 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f73debe-8d66-454d-84aa-1559f284bfe0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-dswf7\" (UID: \"0f73debe-8d66-454d-84aa-1559f284bfe0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.699592 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95581019-de0d-4172-9b8a-765b66064517-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-66589544f4-4qc95\" (UID: \"95581019-de0d-4172-9b8a-765b66064517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.723157 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2tft\" (UniqueName: \"kubernetes.io/projected/1d614362-98da-46f5-8874-c5afbd3fa2b8-kube-api-access-r2tft\") pod \"obo-prometheus-operator-68bc856cb9-nsqq5\" (UID: \"1d614362-98da-46f5-8874-c5afbd3fa2b8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.743571 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-gk97g"] Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.744223 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.747845 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-6hblk" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.748148 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.753063 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.766868 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-gk97g"] Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.802310 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95581019-de0d-4172-9b8a-765b66064517-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-4qc95\" (UID: \"95581019-de0d-4172-9b8a-765b66064517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.802351 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f73debe-8d66-454d-84aa-1559f284bfe0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-dswf7\" (UID: \"0f73debe-8d66-454d-84aa-1559f284bfe0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.802386 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f73debe-8d66-454d-84aa-1559f284bfe0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-dswf7\" (UID: \"0f73debe-8d66-454d-84aa-1559f284bfe0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.802424 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95581019-de0d-4172-9b8a-765b66064517-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-4qc95\" (UID: 
\"95581019-de0d-4172-9b8a-765b66064517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.807818 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95581019-de0d-4172-9b8a-765b66064517-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-4qc95\" (UID: \"95581019-de0d-4172-9b8a-765b66064517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.807872 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f73debe-8d66-454d-84aa-1559f284bfe0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-dswf7\" (UID: \"0f73debe-8d66-454d-84aa-1559f284bfe0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.808496 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95581019-de0d-4172-9b8a-765b66064517-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-4qc95\" (UID: \"95581019-de0d-4172-9b8a-765b66064517\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.814961 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f73debe-8d66-454d-84aa-1559f284bfe0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66589544f4-dswf7\" (UID: \"0f73debe-8d66-454d-84aa-1559f284bfe0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.850559 4932 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jnl27"] Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.851406 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.853260 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-29dxh" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.856758 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.860668 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jnl27"] Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.873614 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.904207 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlxdr\" (UniqueName: \"kubernetes.io/projected/aef7f5d0-1875-434a-a818-cc3c9e633fd2-kube-api-access-vlxdr\") pod \"observability-operator-59bdc8b94-gk97g\" (UID: \"aef7f5d0-1875-434a-a818-cc3c9e633fd2\") " pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:06 crc kubenswrapper[4932]: I0218 19:46:06.904493 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/aef7f5d0-1875-434a-a818-cc3c9e633fd2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-gk97g\" (UID: \"aef7f5d0-1875-434a-a818-cc3c9e633fd2\") " pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.006078 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6c85304-6fdd-4763-90cb-5a1f61318fd9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jnl27\" (UID: \"d6c85304-6fdd-4763-90cb-5a1f61318fd9\") " pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.006135 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlxdr\" (UniqueName: \"kubernetes.io/projected/aef7f5d0-1875-434a-a818-cc3c9e633fd2-kube-api-access-vlxdr\") pod \"observability-operator-59bdc8b94-gk97g\" (UID: \"aef7f5d0-1875-434a-a818-cc3c9e633fd2\") " pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.006159 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/aef7f5d0-1875-434a-a818-cc3c9e633fd2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-gk97g\" (UID: \"aef7f5d0-1875-434a-a818-cc3c9e633fd2\") " pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.006201 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9fzg\" (UniqueName: \"kubernetes.io/projected/d6c85304-6fdd-4763-90cb-5a1f61318fd9-kube-api-access-v9fzg\") pod \"perses-operator-5bf474d74f-jnl27\" (UID: \"d6c85304-6fdd-4763-90cb-5a1f61318fd9\") " pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.010907 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/aef7f5d0-1875-434a-a818-cc3c9e633fd2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-gk97g\" (UID: \"aef7f5d0-1875-434a-a818-cc3c9e633fd2\") " pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.020475 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlxdr\" (UniqueName: \"kubernetes.io/projected/aef7f5d0-1875-434a-a818-cc3c9e633fd2-kube-api-access-vlxdr\") pod \"observability-operator-59bdc8b94-gk97g\" (UID: \"aef7f5d0-1875-434a-a818-cc3c9e633fd2\") " pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.067197 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.087561 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5"] Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.107627 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9fzg\" (UniqueName: \"kubernetes.io/projected/d6c85304-6fdd-4763-90cb-5a1f61318fd9-kube-api-access-v9fzg\") pod \"perses-operator-5bf474d74f-jnl27\" (UID: \"d6c85304-6fdd-4763-90cb-5a1f61318fd9\") " pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.107728 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6c85304-6fdd-4763-90cb-5a1f61318fd9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jnl27\" (UID: \"d6c85304-6fdd-4763-90cb-5a1f61318fd9\") " pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.108615 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d6c85304-6fdd-4763-90cb-5a1f61318fd9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jnl27\" (UID: \"d6c85304-6fdd-4763-90cb-5a1f61318fd9\") " pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.136142 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9fzg\" (UniqueName: \"kubernetes.io/projected/d6c85304-6fdd-4763-90cb-5a1f61318fd9-kube-api-access-v9fzg\") pod \"perses-operator-5bf474d74f-jnl27\" (UID: \"d6c85304-6fdd-4763-90cb-5a1f61318fd9\") " pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 
19:46:07.172077 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.208320 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95"] Feb 18 19:46:07 crc kubenswrapper[4932]: W0218 19:46:07.210311 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95581019_de0d_4172_9b8a_765b66064517.slice/crio-32347782cb024591c5b4b56416c524f63377b533bfb9cfeb7f0afbbbf48a574c WatchSource:0}: Error finding container 32347782cb024591c5b4b56416c524f63377b533bfb9cfeb7f0afbbbf48a574c: Status 404 returned error can't find the container with id 32347782cb024591c5b4b56416c524f63377b533bfb9cfeb7f0afbbbf48a574c Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.222396 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7"] Feb 18 19:46:07 crc kubenswrapper[4932]: W0218 19:46:07.228972 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f73debe_8d66_454d_84aa_1559f284bfe0.slice/crio-16ad7e149449f170c6cadf24108d91741fe7d164125b320fb4afd8779629d7a2 WatchSource:0}: Error finding container 16ad7e149449f170c6cadf24108d91741fe7d164125b320fb4afd8779629d7a2: Status 404 returned error can't find the container with id 16ad7e149449f170c6cadf24108d91741fe7d164125b320fb4afd8779629d7a2 Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.410607 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jnl27"] Feb 18 19:46:07 crc kubenswrapper[4932]: W0218 19:46:07.417015 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6c85304_6fdd_4763_90cb_5a1f61318fd9.slice/crio-f6aa05cfeaa10b2511f9b20816c707c6711f2e99d4f77a95de0de7903ffa1d8d WatchSource:0}: Error finding container f6aa05cfeaa10b2511f9b20816c707c6711f2e99d4f77a95de0de7903ffa1d8d: Status 404 returned error can't find the container with id f6aa05cfeaa10b2511f9b20816c707c6711f2e99d4f77a95de0de7903ffa1d8d Feb 18 19:46:07 crc kubenswrapper[4932]: I0218 19:46:07.536277 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-gk97g"] Feb 18 19:46:07 crc kubenswrapper[4932]: W0218 19:46:07.544438 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaef7f5d0_1875_434a_a818_cc3c9e633fd2.slice/crio-d461a94c40f2179fe647346f134811d33bb4912dbf1fc116282065326ca2693b WatchSource:0}: Error finding container d461a94c40f2179fe647346f134811d33bb4912dbf1fc116282065326ca2693b: Status 404 returned error can't find the container with id d461a94c40f2179fe647346f134811d33bb4912dbf1fc116282065326ca2693b Feb 18 19:46:08 crc kubenswrapper[4932]: I0218 19:46:08.092317 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" event={"ID":"95581019-de0d-4172-9b8a-765b66064517","Type":"ContainerStarted","Data":"32347782cb024591c5b4b56416c524f63377b533bfb9cfeb7f0afbbbf48a574c"} Feb 18 19:46:08 crc kubenswrapper[4932]: I0218 19:46:08.093361 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" event={"ID":"0f73debe-8d66-454d-84aa-1559f284bfe0","Type":"ContainerStarted","Data":"16ad7e149449f170c6cadf24108d91741fe7d164125b320fb4afd8779629d7a2"} Feb 18 19:46:08 crc kubenswrapper[4932]: I0218 19:46:08.094306 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5" event={"ID":"1d614362-98da-46f5-8874-c5afbd3fa2b8","Type":"ContainerStarted","Data":"c565d0b24f3b065325fba0acef142b3e4ba0b505100c5e470aac7e1e822c0c6e"} Feb 18 19:46:08 crc kubenswrapper[4932]: I0218 19:46:08.095011 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" event={"ID":"d6c85304-6fdd-4763-90cb-5a1f61318fd9","Type":"ContainerStarted","Data":"f6aa05cfeaa10b2511f9b20816c707c6711f2e99d4f77a95de0de7903ffa1d8d"} Feb 18 19:46:08 crc kubenswrapper[4932]: I0218 19:46:08.095788 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" event={"ID":"aef7f5d0-1875-434a-a818-cc3c9e633fd2","Type":"ContainerStarted","Data":"d461a94c40f2179fe647346f134811d33bb4912dbf1fc116282065326ca2693b"} Feb 18 19:46:23 crc kubenswrapper[4932]: E0218 19:46:23.472520 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" Feb 18 19:46:23 crc kubenswrapper[4932]: E0218 19:46:23.473214 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) 
--images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) --images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) --openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:e797cdb47beef40b04da7b6d645bca3dc32e6247003c45b56b38efd9e13bf01c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.r
edhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:7d662a120305e2528acc7e9142b770b5b6a7f4932ddfcadfa4ac953935124895,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:75465aabb0aa427a5c531a8fcde463f6d119afbcc618ebcbf6b7ee9bc8aad160,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:dc18c8d6a4a9a0a574a57cc5082c8a9b26023bd6d69b9732892d584c1dfe5070,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:369729978cecdc13c99ef3d179f8eb8a450a4a0cb70b63c27a55a15d1710ba27,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:d8c7a61d147f62b204d5c5f16864386025393453c9a81ea327bbd25d7765d611,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:b4a6eb1cc118a4334b424614959d8b7f361ddd779b3a72690ca49b0a3f26d9b8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:21d4fff670893ba4b7fbc528cd49f8b71c8281cede9ef84f0697065bb6a7fc50,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:12d9dbe297a1c3b9df671f21156992082bc483887d851fafe76e5d17321ff474,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:e65c37f04f6d76a0cbfe05edb3cddf6a8f14f859ee35
cf3aebea8fcb991d2c19,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:48e4e178c6eeaa9d5dd77a591c185a311b4b4a5caadb7199d48463123e31dc9e,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlxdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-59bdc8b94-gk97g_openshift-operators(aef7f5d0-1875-434a-a818-cc3c9e633fd2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:46:23 crc kubenswrapper[4932]: E0218 19:46:23.474561 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" podUID="aef7f5d0-1875-434a-a818-cc3c9e633fd2" Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.214436 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5" event={"ID":"1d614362-98da-46f5-8874-c5afbd3fa2b8","Type":"ContainerStarted","Data":"1beee27f9333a46f30214fd7417c40bf44e5f148744ce3947732abbf222044d7"} Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.220696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" 
event={"ID":"d6c85304-6fdd-4763-90cb-5a1f61318fd9","Type":"ContainerStarted","Data":"e4210edaee16a6f0ff19c833afffd3fd3218cd2b660956ef5d2e461dedff7d7f"} Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.220823 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.222730 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" event={"ID":"95581019-de0d-4172-9b8a-765b66064517","Type":"ContainerStarted","Data":"9886150bbae0e3e1b23ad3029dfb5ba511369eebf9bae89e9fe9f27a0f5e5110"} Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.224023 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" event={"ID":"0f73debe-8d66-454d-84aa-1559f284bfe0","Type":"ContainerStarted","Data":"b0b36dbceac1e72b55713caad25471d2e55dae7c0b661105d9033e42a9f6d8a8"} Feb 18 19:46:24 crc kubenswrapper[4932]: E0218 19:46:24.225508 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c\\\"\"" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" podUID="aef7f5d0-1875-434a-a818-cc3c9e633fd2" Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.232107 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nsqq5" podStartSLOduration=1.8355237 podStartE2EDuration="18.232093107s" podCreationTimestamp="2026-02-18 19:46:06 +0000 UTC" firstStartedPulling="2026-02-18 19:46:07.11989181 +0000 UTC m=+730.701846655" lastFinishedPulling="2026-02-18 
19:46:23.516461207 +0000 UTC m=+747.098416062" observedRunningTime="2026-02-18 19:46:24.230922108 +0000 UTC m=+747.812876953" watchObservedRunningTime="2026-02-18 19:46:24.232093107 +0000 UTC m=+747.814047952" Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.254852 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-4qc95" podStartSLOduration=1.95211917 podStartE2EDuration="18.254834828s" podCreationTimestamp="2026-02-18 19:46:06 +0000 UTC" firstStartedPulling="2026-02-18 19:46:07.213103173 +0000 UTC m=+730.795058028" lastFinishedPulling="2026-02-18 19:46:23.515818801 +0000 UTC m=+747.097773686" observedRunningTime="2026-02-18 19:46:24.25247826 +0000 UTC m=+747.834433105" watchObservedRunningTime="2026-02-18 19:46:24.254834828 +0000 UTC m=+747.836789673" Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.283372 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66589544f4-dswf7" podStartSLOduration=1.986008778 podStartE2EDuration="18.283358923s" podCreationTimestamp="2026-02-18 19:46:06 +0000 UTC" firstStartedPulling="2026-02-18 19:46:07.234284287 +0000 UTC m=+730.816239132" lastFinishedPulling="2026-02-18 19:46:23.531634422 +0000 UTC m=+747.113589277" observedRunningTime="2026-02-18 19:46:24.281815985 +0000 UTC m=+747.863770830" watchObservedRunningTime="2026-02-18 19:46:24.283358923 +0000 UTC m=+747.865313768" Feb 18 19:46:24 crc kubenswrapper[4932]: I0218 19:46:24.339568 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" podStartSLOduration=2.242927495 podStartE2EDuration="18.339552581s" podCreationTimestamp="2026-02-18 19:46:06 +0000 UTC" firstStartedPulling="2026-02-18 19:46:07.418773505 +0000 UTC m=+731.000728350" lastFinishedPulling="2026-02-18 19:46:23.515398551 +0000 UTC 
m=+747.097353436" observedRunningTime="2026-02-18 19:46:24.339006468 +0000 UTC m=+747.920961313" watchObservedRunningTime="2026-02-18 19:46:24.339552581 +0000 UTC m=+747.921507426" Feb 18 19:46:27 crc kubenswrapper[4932]: I0218 19:46:27.606460 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:46:27 crc kubenswrapper[4932]: I0218 19:46:27.606714 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:46:30 crc kubenswrapper[4932]: I0218 19:46:30.767254 4932 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:46:37 crc kubenswrapper[4932]: I0218 19:46:37.174923 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-jnl27" Feb 18 19:46:38 crc kubenswrapper[4932]: I0218 19:46:38.327343 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" event={"ID":"aef7f5d0-1875-434a-a818-cc3c9e633fd2","Type":"ContainerStarted","Data":"e348e292221ea402acd6c97879f83914f45375b5d839485b5ac139ae935d29bb"} Feb 18 19:46:38 crc kubenswrapper[4932]: I0218 19:46:38.327943 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:38 crc kubenswrapper[4932]: I0218 19:46:38.330256 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-59bdc8b94-gk97g" Feb 18 19:46:38 crc kubenswrapper[4932]: I0218 19:46:38.375966 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-gk97g" podStartSLOduration=2.521598531 podStartE2EDuration="32.375946808s" podCreationTimestamp="2026-02-18 19:46:06 +0000 UTC" firstStartedPulling="2026-02-18 19:46:07.550491489 +0000 UTC m=+731.132446334" lastFinishedPulling="2026-02-18 19:46:37.404839746 +0000 UTC m=+760.986794611" observedRunningTime="2026-02-18 19:46:38.349555906 +0000 UTC m=+761.931510751" watchObservedRunningTime="2026-02-18 19:46:38.375946808 +0000 UTC m=+761.957901653" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.491217 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-khmxv"] Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.493060 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.502577 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khmxv"] Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.507543 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-kube-api-access-vrxgd\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.507614 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-catalog-content\") pod \"community-operators-khmxv\" (UID: 
\"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.507706 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-utilities\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.609661 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-catalog-content\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.609792 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-kube-api-access-vrxgd\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.609885 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-utilities\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.610865 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-utilities\") pod \"community-operators-khmxv\" (UID: 
\"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.611028 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-catalog-content\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.636692 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-kube-api-access-vrxgd\") pod \"community-operators-khmxv\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:53 crc kubenswrapper[4932]: I0218 19:46:53.811876 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:46:54 crc kubenswrapper[4932]: I0218 19:46:54.249018 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khmxv"] Feb 18 19:46:54 crc kubenswrapper[4932]: I0218 19:46:54.422486 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerStarted","Data":"4e27e9c63915178a1013148fd8c27d21f8d6ff07ebec24244d082f831b9b799a"} Feb 18 19:46:54 crc kubenswrapper[4932]: I0218 19:46:54.422526 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerStarted","Data":"d2d248bc96ff686741bc9d18c111e029f66e7b55a12534b4aee09014a335d602"} Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.430204 4932 generic.go:334] "Generic (PLEG): container finished" podID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerID="4e27e9c63915178a1013148fd8c27d21f8d6ff07ebec24244d082f831b9b799a" exitCode=0 Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.430373 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerDied","Data":"4e27e9c63915178a1013148fd8c27d21f8d6ff07ebec24244d082f831b9b799a"} Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.430529 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerStarted","Data":"e88a66981dc355681048566540ba21d2b80b46a376a5c1da04f5047ddb9643ed"} Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.902057 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b"] Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.903835 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.906482 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:46:55 crc kubenswrapper[4932]: I0218 19:46:55.917741 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b"] Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.038474 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.038617 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwr4n\" (UniqueName: \"kubernetes.io/projected/0698d2a5-118e-4c2b-8325-875aab6bdc97-kube-api-access-jwr4n\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.038723 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-util\") pod 
\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.139582 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.139774 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.139849 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwr4n\" (UniqueName: \"kubernetes.io/projected/0698d2a5-118e-4c2b-8325-875aab6bdc97-kube-api-access-jwr4n\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.140132 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.140236 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.168212 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwr4n\" (UniqueName: \"kubernetes.io/projected/0698d2a5-118e-4c2b-8325-875aab6bdc97-kube-api-access-jwr4n\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.219090 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.395562 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b"] Feb 18 19:46:56 crc kubenswrapper[4932]: W0218 19:46:56.399783 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0698d2a5_118e_4c2b_8325_875aab6bdc97.slice/crio-14e8d6b9c5e954c4f70f81d07cfbf0c8db1088da3b9a2ce650a28b6a0a97a38e WatchSource:0}: Error finding container 14e8d6b9c5e954c4f70f81d07cfbf0c8db1088da3b9a2ce650a28b6a0a97a38e: Status 404 returned error can't find the container with id 14e8d6b9c5e954c4f70f81d07cfbf0c8db1088da3b9a2ce650a28b6a0a97a38e Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.437072 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" event={"ID":"0698d2a5-118e-4c2b-8325-875aab6bdc97","Type":"ContainerStarted","Data":"14e8d6b9c5e954c4f70f81d07cfbf0c8db1088da3b9a2ce650a28b6a0a97a38e"} Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.439151 4932 generic.go:334] "Generic (PLEG): container finished" podID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerID="e88a66981dc355681048566540ba21d2b80b46a376a5c1da04f5047ddb9643ed" exitCode=0 Feb 18 19:46:56 crc kubenswrapper[4932]: I0218 19:46:56.439200 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerDied","Data":"e88a66981dc355681048566540ba21d2b80b46a376a5c1da04f5047ddb9643ed"} Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.447027 4932 generic.go:334] "Generic (PLEG): container finished" podID="0698d2a5-118e-4c2b-8325-875aab6bdc97" 
containerID="79642c89b2ecf4466d565ea49dfa91f23064e31795a11ae1577cb45f3de77ea8" exitCode=0 Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.447073 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" event={"ID":"0698d2a5-118e-4c2b-8325-875aab6bdc97","Type":"ContainerDied","Data":"79642c89b2ecf4466d565ea49dfa91f23064e31795a11ae1577cb45f3de77ea8"} Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.453366 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerStarted","Data":"07618326fa18de234030563d641197cae3e4d2f25e999e02de6190d757598673"} Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.503685 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-khmxv" podStartSLOduration=2.093577272 podStartE2EDuration="4.503664587s" podCreationTimestamp="2026-02-18 19:46:53 +0000 UTC" firstStartedPulling="2026-02-18 19:46:54.424649639 +0000 UTC m=+778.006604484" lastFinishedPulling="2026-02-18 19:46:56.834736924 +0000 UTC m=+780.416691799" observedRunningTime="2026-02-18 19:46:57.497925355 +0000 UTC m=+781.079880210" watchObservedRunningTime="2026-02-18 19:46:57.503664587 +0000 UTC m=+781.085619442" Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.606417 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.606476 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.606520 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.607148 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3b543e6ec63bdf78c858f95870e024438d65d986dd0f72b674fc74756af06be"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:46:57 crc kubenswrapper[4932]: I0218 19:46:57.607269 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://f3b543e6ec63bdf78c858f95870e024438d65d986dd0f72b674fc74756af06be" gracePeriod=600 Feb 18 19:46:58 crc kubenswrapper[4932]: I0218 19:46:58.462869 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="f3b543e6ec63bdf78c858f95870e024438d65d986dd0f72b674fc74756af06be" exitCode=0 Feb 18 19:46:58 crc kubenswrapper[4932]: I0218 19:46:58.462940 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"f3b543e6ec63bdf78c858f95870e024438d65d986dd0f72b674fc74756af06be"} Feb 18 19:46:58 crc kubenswrapper[4932]: I0218 19:46:58.463299 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" 
event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"0796b82991176676a1533452d61ed93202733b7f85192cab295504d343f7c992"} Feb 18 19:46:58 crc kubenswrapper[4932]: I0218 19:46:58.463331 4932 scope.go:117] "RemoveContainer" containerID="4ae81c5d1f59105a46f72ab1a12573d6a2070dbba30970140847e1ed2a0ce08d" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.443287 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-brmkq"] Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.446308 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.454465 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brmkq"] Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.492298 4932 generic.go:334] "Generic (PLEG): container finished" podID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerID="74eb47cb548078d77d4e3d87c2d001faf8bfea77fb446fed977ebb6f7bac086a" exitCode=0 Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.492355 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" event={"ID":"0698d2a5-118e-4c2b-8325-875aab6bdc97","Type":"ContainerDied","Data":"74eb47cb548078d77d4e3d87c2d001faf8bfea77fb446fed977ebb6f7bac086a"} Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.584518 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-catalog-content\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.584659 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn2n2\" (UniqueName: \"kubernetes.io/projected/48955357-bac8-4bc1-80f8-939c59861c52-kube-api-access-kn2n2\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.584698 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-utilities\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.685277 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn2n2\" (UniqueName: \"kubernetes.io/projected/48955357-bac8-4bc1-80f8-939c59861c52-kube-api-access-kn2n2\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.685323 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-utilities\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.685380 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-catalog-content\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.685823 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-catalog-content\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.685950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-utilities\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.706840 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn2n2\" (UniqueName: \"kubernetes.io/projected/48955357-bac8-4bc1-80f8-939c59861c52-kube-api-access-kn2n2\") pod \"redhat-operators-brmkq\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.769205 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:46:59 crc kubenswrapper[4932]: I0218 19:46:59.983075 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brmkq"] Feb 18 19:47:00 crc kubenswrapper[4932]: I0218 19:47:00.506324 4932 generic.go:334] "Generic (PLEG): container finished" podID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerID="f63cbdb6097a0bca0c288a3476870c08a7ee4283f8925a22c928513fd0acda40" exitCode=0 Feb 18 19:47:00 crc kubenswrapper[4932]: I0218 19:47:00.506426 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" event={"ID":"0698d2a5-118e-4c2b-8325-875aab6bdc97","Type":"ContainerDied","Data":"f63cbdb6097a0bca0c288a3476870c08a7ee4283f8925a22c928513fd0acda40"} Feb 18 19:47:00 crc kubenswrapper[4932]: I0218 19:47:00.507925 4932 generic.go:334] "Generic (PLEG): container finished" podID="48955357-bac8-4bc1-80f8-939c59861c52" containerID="297851fa0e23edca20383a70a1a308b0693ebe352c76ce53a3ade9506c01c89a" exitCode=0 Feb 18 19:47:00 crc kubenswrapper[4932]: I0218 19:47:00.507955 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerDied","Data":"297851fa0e23edca20383a70a1a308b0693ebe352c76ce53a3ade9506c01c89a"} Feb 18 19:47:00 crc kubenswrapper[4932]: I0218 19:47:00.507974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerStarted","Data":"d674cd5ee7e951ca323d8c8395fb5893fb353533a055e7722bd24a4a5c045733"} Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.514134 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" 
event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerStarted","Data":"da0114a0a1163416b252518f5f7b35cd4051a8a02afdb5d42f047eeb519dcee4"} Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.777323 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.820752 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-bundle\") pod \"0698d2a5-118e-4c2b-8325-875aab6bdc97\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.821408 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwr4n\" (UniqueName: \"kubernetes.io/projected/0698d2a5-118e-4c2b-8325-875aab6bdc97-kube-api-access-jwr4n\") pod \"0698d2a5-118e-4c2b-8325-875aab6bdc97\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.821568 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-util\") pod \"0698d2a5-118e-4c2b-8325-875aab6bdc97\" (UID: \"0698d2a5-118e-4c2b-8325-875aab6bdc97\") " Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.825868 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-bundle" (OuterVolumeSpecName: "bundle") pod "0698d2a5-118e-4c2b-8325-875aab6bdc97" (UID: "0698d2a5-118e-4c2b-8325-875aab6bdc97"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.832475 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0698d2a5-118e-4c2b-8325-875aab6bdc97-kube-api-access-jwr4n" (OuterVolumeSpecName: "kube-api-access-jwr4n") pod "0698d2a5-118e-4c2b-8325-875aab6bdc97" (UID: "0698d2a5-118e-4c2b-8325-875aab6bdc97"). InnerVolumeSpecName "kube-api-access-jwr4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.835106 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-util" (OuterVolumeSpecName: "util") pod "0698d2a5-118e-4c2b-8325-875aab6bdc97" (UID: "0698d2a5-118e-4c2b-8325-875aab6bdc97"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.923635 4932 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.923860 4932 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0698d2a5-118e-4c2b-8325-875aab6bdc97-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:01 crc kubenswrapper[4932]: I0218 19:47:01.923937 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwr4n\" (UniqueName: \"kubernetes.io/projected/0698d2a5-118e-4c2b-8325-875aab6bdc97-kube-api-access-jwr4n\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:02 crc kubenswrapper[4932]: I0218 19:47:02.527297 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" 
event={"ID":"0698d2a5-118e-4c2b-8325-875aab6bdc97","Type":"ContainerDied","Data":"14e8d6b9c5e954c4f70f81d07cfbf0c8db1088da3b9a2ce650a28b6a0a97a38e"} Feb 18 19:47:02 crc kubenswrapper[4932]: I0218 19:47:02.527612 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14e8d6b9c5e954c4f70f81d07cfbf0c8db1088da3b9a2ce650a28b6a0a97a38e" Feb 18 19:47:02 crc kubenswrapper[4932]: I0218 19:47:02.527748 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5nr5b" Feb 18 19:47:02 crc kubenswrapper[4932]: I0218 19:47:02.530872 4932 generic.go:334] "Generic (PLEG): container finished" podID="48955357-bac8-4bc1-80f8-939c59861c52" containerID="da0114a0a1163416b252518f5f7b35cd4051a8a02afdb5d42f047eeb519dcee4" exitCode=0 Feb 18 19:47:02 crc kubenswrapper[4932]: I0218 19:47:02.530968 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerDied","Data":"da0114a0a1163416b252518f5f7b35cd4051a8a02afdb5d42f047eeb519dcee4"} Feb 18 19:47:03 crc kubenswrapper[4932]: I0218 19:47:03.543116 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerStarted","Data":"06c72665858ea405b5bdd8f16cf7d3063d79f619bef00c595414e4690a8411ee"} Feb 18 19:47:03 crc kubenswrapper[4932]: I0218 19:47:03.569284 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-brmkq" podStartSLOduration=2.11760456 podStartE2EDuration="4.569268793s" podCreationTimestamp="2026-02-18 19:46:59 +0000 UTC" firstStartedPulling="2026-02-18 19:47:00.509474138 +0000 UTC m=+784.091428983" lastFinishedPulling="2026-02-18 19:47:02.961138331 +0000 UTC m=+786.543093216" observedRunningTime="2026-02-18 
19:47:03.568626297 +0000 UTC m=+787.150581182" watchObservedRunningTime="2026-02-18 19:47:03.569268793 +0000 UTC m=+787.151223638" Feb 18 19:47:03 crc kubenswrapper[4932]: I0218 19:47:03.813095 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:47:03 crc kubenswrapper[4932]: I0218 19:47:03.813195 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:47:03 crc kubenswrapper[4932]: I0218 19:47:03.888263 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.615147 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.771947 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-rlwc2"] Feb 18 19:47:04 crc kubenswrapper[4932]: E0218 19:47:04.772216 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="extract" Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.772235 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="extract" Feb 18 19:47:04 crc kubenswrapper[4932]: E0218 19:47:04.772250 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="util" Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.772258 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="util" Feb 18 19:47:04 crc kubenswrapper[4932]: E0218 19:47:04.772271 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" 
containerName="pull" Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.772279 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="pull" Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.772413 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0698d2a5-118e-4c2b-8325-875aab6bdc97" containerName="extract" Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.772876 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2" Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.774503 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-h4lf6" Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.775311 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.775849 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.780887 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-rlwc2"] Feb 18 19:47:04 crc kubenswrapper[4932]: I0218 19:47:04.963149 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrnpt\" (UniqueName: \"kubernetes.io/projected/077cce1b-0169-481c-adf2-3d0536d1c943-kube-api-access-xrnpt\") pod \"nmstate-operator-694c9596b7-rlwc2\" (UID: \"077cce1b-0169-481c-adf2-3d0536d1c943\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2" Feb 18 19:47:05 crc kubenswrapper[4932]: I0218 19:47:05.064254 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrnpt\" (UniqueName: 
\"kubernetes.io/projected/077cce1b-0169-481c-adf2-3d0536d1c943-kube-api-access-xrnpt\") pod \"nmstate-operator-694c9596b7-rlwc2\" (UID: \"077cce1b-0169-481c-adf2-3d0536d1c943\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2" Feb 18 19:47:05 crc kubenswrapper[4932]: I0218 19:47:05.097362 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrnpt\" (UniqueName: \"kubernetes.io/projected/077cce1b-0169-481c-adf2-3d0536d1c943-kube-api-access-xrnpt\") pod \"nmstate-operator-694c9596b7-rlwc2\" (UID: \"077cce1b-0169-481c-adf2-3d0536d1c943\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2" Feb 18 19:47:05 crc kubenswrapper[4932]: I0218 19:47:05.391688 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2" Feb 18 19:47:05 crc kubenswrapper[4932]: I0218 19:47:05.595017 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-rlwc2"] Feb 18 19:47:06 crc kubenswrapper[4932]: I0218 19:47:06.567368 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2" event={"ID":"077cce1b-0169-481c-adf2-3d0536d1c943","Type":"ContainerStarted","Data":"9036f1f8edf5dcf4d85cbb3331dfdb47f771b2dc1ad3908d80b077b1fd5a733d"} Feb 18 19:47:07 crc kubenswrapper[4932]: I0218 19:47:07.643311 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khmxv"] Feb 18 19:47:07 crc kubenswrapper[4932]: I0218 19:47:07.643865 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-khmxv" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="registry-server" containerID="cri-o://07618326fa18de234030563d641197cae3e4d2f25e999e02de6190d757598673" gracePeriod=2 Feb 18 19:47:09 crc kubenswrapper[4932]: I0218 19:47:09.769606 4932 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:47:09 crc kubenswrapper[4932]: I0218 19:47:09.769959 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:47:09 crc kubenswrapper[4932]: I0218 19:47:09.839010 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.601056 4932 generic.go:334] "Generic (PLEG): container finished" podID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerID="07618326fa18de234030563d641197cae3e4d2f25e999e02de6190d757598673" exitCode=0 Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.601860 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerDied","Data":"07618326fa18de234030563d641197cae3e4d2f25e999e02de6190d757598673"} Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.644016 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.785252 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.843263 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-utilities\") pod \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.843342 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-catalog-content\") pod \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.843373 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-kube-api-access-vrxgd\") pod \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\" (UID: \"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6\") " Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.844115 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-utilities" (OuterVolumeSpecName: "utilities") pod "a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" (UID: "a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.852300 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-kube-api-access-vrxgd" (OuterVolumeSpecName: "kube-api-access-vrxgd") pod "a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" (UID: "a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6"). InnerVolumeSpecName "kube-api-access-vrxgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.895025 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" (UID: "a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.945227 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.945256 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:10 crc kubenswrapper[4932]: I0218 19:47:10.945267 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrxgd\" (UniqueName: \"kubernetes.io/projected/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6-kube-api-access-vrxgd\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.609697 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2" event={"ID":"077cce1b-0169-481c-adf2-3d0536d1c943","Type":"ContainerStarted","Data":"255b98631482fd40b1d833f4ee662645c35802d56c36c72ec267be78709aa9ec"} Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.613252 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khmxv" Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.613825 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmxv" event={"ID":"a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6","Type":"ContainerDied","Data":"d2d248bc96ff686741bc9d18c111e029f66e7b55a12534b4aee09014a335d602"} Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.613865 4932 scope.go:117] "RemoveContainer" containerID="07618326fa18de234030563d641197cae3e4d2f25e999e02de6190d757598673" Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.632210 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-rlwc2" podStartSLOduration=1.873537918 podStartE2EDuration="7.632195151s" podCreationTimestamp="2026-02-18 19:47:04 +0000 UTC" firstStartedPulling="2026-02-18 19:47:05.604275775 +0000 UTC m=+789.186230620" lastFinishedPulling="2026-02-18 19:47:11.362932998 +0000 UTC m=+794.944887853" observedRunningTime="2026-02-18 19:47:11.631170216 +0000 UTC m=+795.213125071" watchObservedRunningTime="2026-02-18 19:47:11.632195151 +0000 UTC m=+795.214149996" Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.645547 4932 scope.go:117] "RemoveContainer" containerID="e88a66981dc355681048566540ba21d2b80b46a376a5c1da04f5047ddb9643ed" Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.648589 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khmxv"] Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.652579 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-khmxv"] Feb 18 19:47:11 crc kubenswrapper[4932]: I0218 19:47:11.667323 4932 scope.go:117] "RemoveContainer" containerID="4e27e9c63915178a1013148fd8c27d21f8d6ff07ebec24244d082f831b9b799a" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.641921 4932 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-mtshd"] Feb 18 19:47:12 crc kubenswrapper[4932]: E0218 19:47:12.643765 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="extract-utilities" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.643934 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="extract-utilities" Feb 18 19:47:12 crc kubenswrapper[4932]: E0218 19:47:12.644049 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="extract-content" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.644157 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="extract-content" Feb 18 19:47:12 crc kubenswrapper[4932]: E0218 19:47:12.644334 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="registry-server" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.644435 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="registry-server" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.644754 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" containerName="registry-server" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.645828 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.649672 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pzp9s" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.659782 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"] Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.661615 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.665695 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.667815 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-mtshd"] Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.685109 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-dktbf"] Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.686231 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.700019 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"] Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765220 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj96x\" (UniqueName: \"kubernetes.io/projected/df77ffd7-3518-44da-b978-578bfb225ede-kube-api-access-mj96x\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765275 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-dbus-socket\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765303 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms4gm\" (UniqueName: \"kubernetes.io/projected/c87a209a-d7a2-4615-87b9-e8c9ec5a8b91-kube-api-access-ms4gm\") pod \"nmstate-metrics-58c85c668d-mtshd\" (UID: \"c87a209a-d7a2-4615-87b9-e8c9ec5a8b91\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765320 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3dc40b19-6391-4590-9ffd-820e3e865431-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765335 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-nmstate-lock\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765349 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-ovs-socket\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.765421 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j58rj\" (UniqueName: \"kubernetes.io/projected/3dc40b19-6391-4590-9ffd-820e3e865431-kube-api-access-j58rj\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.784998 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"] Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.785656 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.788928 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.789143 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hzhgz" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.789270 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.799523 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"] Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866476 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j58rj\" (UniqueName: \"kubernetes.io/projected/3dc40b19-6391-4590-9ffd-820e3e865431-kube-api-access-j58rj\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866540 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckntg\" (UniqueName: \"kubernetes.io/projected/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-kube-api-access-ckntg\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866564 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: 
\"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866586 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj96x\" (UniqueName: \"kubernetes.io/projected/df77ffd7-3518-44da-b978-578bfb225ede-kube-api-access-mj96x\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866609 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-dbus-socket\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866634 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms4gm\" (UniqueName: \"kubernetes.io/projected/c87a209a-d7a2-4615-87b9-e8c9ec5a8b91-kube-api-access-ms4gm\") pod \"nmstate-metrics-58c85c668d-mtshd\" (UID: \"c87a209a-d7a2-4615-87b9-e8c9ec5a8b91\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866647 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3dc40b19-6391-4590-9ffd-820e3e865431-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866662 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-plugin-serving-cert\") pod 
\"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866681 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-nmstate-lock\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866695 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-ovs-socket\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.866772 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-ovs-socket\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.867399 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-dbus-socket\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: E0218 19:47:12.867574 4932 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 18 19:47:12 crc kubenswrapper[4932]: E0218 19:47:12.867626 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3dc40b19-6391-4590-9ffd-820e3e865431-tls-key-pair podName:3dc40b19-6391-4590-9ffd-820e3e865431 nodeName:}" failed. No retries permitted until 2026-02-18 19:47:13.367601028 +0000 UTC m=+796.949555873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/3dc40b19-6391-4590-9ffd-820e3e865431-tls-key-pair") pod "nmstate-webhook-866bcb46dc-bm8xm" (UID: "3dc40b19-6391-4590-9ffd-820e3e865431") : secret "openshift-nmstate-webhook" not found Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.867747 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/df77ffd7-3518-44da-b978-578bfb225ede-nmstate-lock\") pod \"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.902023 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms4gm\" (UniqueName: \"kubernetes.io/projected/c87a209a-d7a2-4615-87b9-e8c9ec5a8b91-kube-api-access-ms4gm\") pod \"nmstate-metrics-58c85c668d-mtshd\" (UID: \"c87a209a-d7a2-4615-87b9-e8c9ec5a8b91\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.918024 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j58rj\" (UniqueName: \"kubernetes.io/projected/3dc40b19-6391-4590-9ffd-820e3e865431-kube-api-access-j58rj\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.931822 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj96x\" (UniqueName: \"kubernetes.io/projected/df77ffd7-3518-44da-b978-578bfb225ede-kube-api-access-mj96x\") pod 
\"nmstate-handler-dktbf\" (UID: \"df77ffd7-3518-44da-b978-578bfb225ede\") " pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.968215 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.968319 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.968340 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckntg\" (UniqueName: \"kubernetes.io/projected/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-kube-api-access-ckntg\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.969971 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.970091 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" Feb 18 19:47:12 crc kubenswrapper[4932]: I0218 19:47:12.972527 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.014598 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.015001 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckntg\" (UniqueName: \"kubernetes.io/projected/2cca088e-edc8-4ce7-9ce4-e0561b2576e3-kube-api-access-ckntg\") pod \"nmstate-console-plugin-5c78fc5d65-pr4hl\" (UID: \"2cca088e-edc8-4ce7-9ce4-e0561b2576e3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" Feb 18 19:47:13 crc kubenswrapper[4932]: W0218 19:47:13.071599 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf77ffd7_3518_44da_b978_578bfb225ede.slice/crio-0ea5b4d536e26104daddea7f13d102f6be73a7f45309ea8bea561dbaae21ef15 WatchSource:0}: Error finding container 0ea5b4d536e26104daddea7f13d102f6be73a7f45309ea8bea561dbaae21ef15: Status 404 returned error can't find the container with id 0ea5b4d536e26104daddea7f13d102f6be73a7f45309ea8bea561dbaae21ef15 Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.103599 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.154964 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69545748b6-t8skh"] Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.156016 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.174905 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69545748b6-t8skh"] Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.203132 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6" path="/var/lib/kubelet/pods/a8af6ecc-78ff-4d7f-9d4b-9ac78e81b7f6/volumes" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.235899 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-mtshd"] Feb 18 19:47:13 crc kubenswrapper[4932]: W0218 19:47:13.240799 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc87a209a_d7a2_4615_87b9_e8c9ec5a8b91.slice/crio-0a07372d4e55918d3ab8e5ba51af58879e9be5c5c0f133591b04d87def796c80 WatchSource:0}: Error finding container 0a07372d4e55918d3ab8e5ba51af58879e9be5c5c0f133591b04d87def796c80: Status 404 returned error can't find the container with id 0a07372d4e55918d3ab8e5ba51af58879e9be5c5c0f133591b04d87def796c80 Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272517 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-config\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 
crc kubenswrapper[4932]: I0218 19:47:13.272633 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-trusted-ca-bundle\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272680 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-oauth-serving-cert\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272709 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-serving-cert\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272744 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qss7\" (UniqueName: \"kubernetes.io/projected/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-kube-api-access-2qss7\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272848 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-service-ca\") pod \"console-69545748b6-t8skh\" (UID: 
\"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.272935 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-oauth-config\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.363963 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl"] Feb 18 19:47:13 crc kubenswrapper[4932]: W0218 19:47:13.371578 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cca088e_edc8_4ce7_9ce4_e0561b2576e3.slice/crio-70eedf3bdb90c3c4529a99cff98591e2423fd7ff508ae2f5b576fcd66340bb0d WatchSource:0}: Error finding container 70eedf3bdb90c3c4529a99cff98591e2423fd7ff508ae2f5b576fcd66340bb0d: Status 404 returned error can't find the container with id 70eedf3bdb90c3c4529a99cff98591e2423fd7ff508ae2f5b576fcd66340bb0d Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373465 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qss7\" (UniqueName: \"kubernetes.io/projected/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-kube-api-access-2qss7\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373523 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-service-ca\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " 
pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373565 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-oauth-config\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373608 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-config\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3dc40b19-6391-4590-9ffd-820e3e865431-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373678 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-trusted-ca-bundle\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-oauth-serving-cert\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " 
pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.373733 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-serving-cert\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.374635 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-config\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.374725 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-trusted-ca-bundle\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.374980 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-service-ca\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.375365 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-oauth-serving-cert\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc 
kubenswrapper[4932]: I0218 19:47:13.380726 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-serving-cert\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.380746 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3dc40b19-6391-4590-9ffd-820e3e865431-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bm8xm\" (UID: \"3dc40b19-6391-4590-9ffd-820e3e865431\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.381212 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-console-oauth-config\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.388542 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qss7\" (UniqueName: \"kubernetes.io/projected/3f7d8b2e-aed4-4db7-98ea-5226f18411a7-kube-api-access-2qss7\") pod \"console-69545748b6-t8skh\" (UID: \"3f7d8b2e-aed4-4db7-98ea-5226f18411a7\") " pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.439016 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brmkq"] Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.439296 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-brmkq" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="registry-server" 
containerID="cri-o://06c72665858ea405b5bdd8f16cf7d3063d79f619bef00c595414e4690a8411ee" gracePeriod=2 Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.498951 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.585722 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.632887 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" event={"ID":"c87a209a-d7a2-4615-87b9-e8c9ec5a8b91","Type":"ContainerStarted","Data":"0a07372d4e55918d3ab8e5ba51af58879e9be5c5c0f133591b04d87def796c80"} Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.634997 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" event={"ID":"2cca088e-edc8-4ce7-9ce4-e0561b2576e3","Type":"ContainerStarted","Data":"70eedf3bdb90c3c4529a99cff98591e2423fd7ff508ae2f5b576fcd66340bb0d"} Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.637676 4932 generic.go:334] "Generic (PLEG): container finished" podID="48955357-bac8-4bc1-80f8-939c59861c52" containerID="06c72665858ea405b5bdd8f16cf7d3063d79f619bef00c595414e4690a8411ee" exitCode=0 Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.637740 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerDied","Data":"06c72665858ea405b5bdd8f16cf7d3063d79f619bef00c595414e4690a8411ee"} Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.639022 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dktbf" 
event={"ID":"df77ffd7-3518-44da-b978-578bfb225ede","Type":"ContainerStarted","Data":"0ea5b4d536e26104daddea7f13d102f6be73a7f45309ea8bea561dbaae21ef15"} Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.698437 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69545748b6-t8skh"] Feb 18 19:47:13 crc kubenswrapper[4932]: W0218 19:47:13.706453 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f7d8b2e_aed4_4db7_98ea_5226f18411a7.slice/crio-1c8c716c6c79b278f99098d817e770c6ef1e72145893a76dce99f943ac5aa51c WatchSource:0}: Error finding container 1c8c716c6c79b278f99098d817e770c6ef1e72145893a76dce99f943ac5aa51c: Status 404 returned error can't find the container with id 1c8c716c6c79b278f99098d817e770c6ef1e72145893a76dce99f943ac5aa51c Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.817003 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.881098 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-utilities\") pod \"48955357-bac8-4bc1-80f8-939c59861c52\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.881475 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn2n2\" (UniqueName: \"kubernetes.io/projected/48955357-bac8-4bc1-80f8-939c59861c52-kube-api-access-kn2n2\") pod \"48955357-bac8-4bc1-80f8-939c59861c52\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.881551 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-catalog-content\") pod \"48955357-bac8-4bc1-80f8-939c59861c52\" (UID: \"48955357-bac8-4bc1-80f8-939c59861c52\") " Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.882346 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-utilities" (OuterVolumeSpecName: "utilities") pod "48955357-bac8-4bc1-80f8-939c59861c52" (UID: "48955357-bac8-4bc1-80f8-939c59861c52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.887377 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48955357-bac8-4bc1-80f8-939c59861c52-kube-api-access-kn2n2" (OuterVolumeSpecName: "kube-api-access-kn2n2") pod "48955357-bac8-4bc1-80f8-939c59861c52" (UID: "48955357-bac8-4bc1-80f8-939c59861c52"). InnerVolumeSpecName "kube-api-access-kn2n2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.982866 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn2n2\" (UniqueName: \"kubernetes.io/projected/48955357-bac8-4bc1-80f8-939c59861c52-kube-api-access-kn2n2\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:13 crc kubenswrapper[4932]: I0218 19:47:13.982899 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.016290 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm"] Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.018466 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48955357-bac8-4bc1-80f8-939c59861c52" (UID: "48955357-bac8-4bc1-80f8-939c59861c52"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:14 crc kubenswrapper[4932]: W0218 19:47:14.024771 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dc40b19_6391_4590_9ffd_820e3e865431.slice/crio-c1df7aab5b0b0d9fc7f610a16632ba1d48edfcc9bac4a4371f6e9c4448e1d75c WatchSource:0}: Error finding container c1df7aab5b0b0d9fc7f610a16632ba1d48edfcc9bac4a4371f6e9c4448e1d75c: Status 404 returned error can't find the container with id c1df7aab5b0b0d9fc7f610a16632ba1d48edfcc9bac4a4371f6e9c4448e1d75c Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.083987 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48955357-bac8-4bc1-80f8-939c59861c52-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.647355 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brmkq" Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.647355 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brmkq" event={"ID":"48955357-bac8-4bc1-80f8-939c59861c52","Type":"ContainerDied","Data":"d674cd5ee7e951ca323d8c8395fb5893fb353533a055e7722bd24a4a5c045733"} Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.647799 4932 scope.go:117] "RemoveContainer" containerID="06c72665858ea405b5bdd8f16cf7d3063d79f619bef00c595414e4690a8411ee" Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.648677 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69545748b6-t8skh" event={"ID":"3f7d8b2e-aed4-4db7-98ea-5226f18411a7","Type":"ContainerStarted","Data":"b7dc11f14fa6aebbc14ff387ecb9a2cb207823a3d53e2ca6bda864cb7b4b9e16"} Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.648716 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-69545748b6-t8skh" event={"ID":"3f7d8b2e-aed4-4db7-98ea-5226f18411a7","Type":"ContainerStarted","Data":"1c8c716c6c79b278f99098d817e770c6ef1e72145893a76dce99f943ac5aa51c"} Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.649821 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" event={"ID":"3dc40b19-6391-4590-9ffd-820e3e865431","Type":"ContainerStarted","Data":"c1df7aab5b0b0d9fc7f610a16632ba1d48edfcc9bac4a4371f6e9c4448e1d75c"} Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.669066 4932 scope.go:117] "RemoveContainer" containerID="da0114a0a1163416b252518f5f7b35cd4051a8a02afdb5d42f047eeb519dcee4" Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.679440 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69545748b6-t8skh" podStartSLOduration=1.679406674 podStartE2EDuration="1.679406674s" podCreationTimestamp="2026-02-18 19:47:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:47:14.667710595 +0000 UTC m=+798.249665440" watchObservedRunningTime="2026-02-18 19:47:14.679406674 +0000 UTC m=+798.261361549" Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.691143 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brmkq"] Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.695123 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-brmkq"] Feb 18 19:47:14 crc kubenswrapper[4932]: I0218 19:47:14.719106 4932 scope.go:117] "RemoveContainer" containerID="297851fa0e23edca20383a70a1a308b0693ebe352c76ce53a3ade9506c01c89a" Feb 18 19:47:15 crc kubenswrapper[4932]: I0218 19:47:15.185970 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48955357-bac8-4bc1-80f8-939c59861c52" 
path="/var/lib/kubelet/pods/48955357-bac8-4bc1-80f8-939c59861c52/volumes" Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.691648 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" event={"ID":"3dc40b19-6391-4590-9ffd-820e3e865431","Type":"ContainerStarted","Data":"ac1aa61dedb6411726ebb230a5b291e86bcaede72eb93235a8bd412981c79c96"} Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.692119 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.693258 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dktbf" event={"ID":"df77ffd7-3518-44da-b978-578bfb225ede","Type":"ContainerStarted","Data":"a97b21848400f2c5eb4a9c371a397bde571296c3bc3efec1ee973727e41a264b"} Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.693374 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.695772 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" event={"ID":"c87a209a-d7a2-4615-87b9-e8c9ec5a8b91","Type":"ContainerStarted","Data":"d459f3f8ae2e716ea8ba1253e78ec6c5c664043541e7d60da754ef158668de73"} Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.721324 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" podStartSLOduration=2.587090027 podStartE2EDuration="4.721293497s" podCreationTimestamp="2026-02-18 19:47:12 +0000 UTC" firstStartedPulling="2026-02-18 19:47:14.0278031 +0000 UTC m=+797.609757945" lastFinishedPulling="2026-02-18 19:47:16.16200656 +0000 UTC m=+799.743961415" observedRunningTime="2026-02-18 19:47:16.712812558 +0000 UTC m=+800.294767453" watchObservedRunningTime="2026-02-18 
19:47:16.721293497 +0000 UTC m=+800.303248782" Feb 18 19:47:16 crc kubenswrapper[4932]: I0218 19:47:16.730369 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-dktbf" podStartSLOduration=1.687606878 podStartE2EDuration="4.730350451s" podCreationTimestamp="2026-02-18 19:47:12 +0000 UTC" firstStartedPulling="2026-02-18 19:47:13.073710433 +0000 UTC m=+796.655665278" lastFinishedPulling="2026-02-18 19:47:16.116454006 +0000 UTC m=+799.698408851" observedRunningTime="2026-02-18 19:47:16.730060653 +0000 UTC m=+800.312015508" watchObservedRunningTime="2026-02-18 19:47:16.730350451 +0000 UTC m=+800.312305306" Feb 18 19:47:17 crc kubenswrapper[4932]: I0218 19:47:17.705124 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" event={"ID":"2cca088e-edc8-4ce7-9ce4-e0561b2576e3","Type":"ContainerStarted","Data":"97252ad2b4caf0138455b3bf0e0dbc5147313ac1c51a11fe95e90b726143239b"} Feb 18 19:47:17 crc kubenswrapper[4932]: I0218 19:47:17.729910 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-pr4hl" podStartSLOduration=1.885844648 podStartE2EDuration="5.729889619s" podCreationTimestamp="2026-02-18 19:47:12 +0000 UTC" firstStartedPulling="2026-02-18 19:47:13.373299654 +0000 UTC m=+796.955254499" lastFinishedPulling="2026-02-18 19:47:17.217344625 +0000 UTC m=+800.799299470" observedRunningTime="2026-02-18 19:47:17.727240204 +0000 UTC m=+801.309195059" watchObservedRunningTime="2026-02-18 19:47:17.729889619 +0000 UTC m=+801.311844504" Feb 18 19:47:18 crc kubenswrapper[4932]: I0218 19:47:18.715789 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" event={"ID":"c87a209a-d7a2-4615-87b9-e8c9ec5a8b91","Type":"ContainerStarted","Data":"e11ac389b1643b96b712d27ac370f21963d6190bb0419ed092d0d691bf04c80d"} Feb 18 19:47:23 crc 
kubenswrapper[4932]: I0218 19:47:23.052156 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-dktbf" Feb 18 19:47:23 crc kubenswrapper[4932]: I0218 19:47:23.090118 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-mtshd" podStartSLOduration=5.948705127 podStartE2EDuration="11.090087002s" podCreationTimestamp="2026-02-18 19:47:12 +0000 UTC" firstStartedPulling="2026-02-18 19:47:13.243627375 +0000 UTC m=+796.825582220" lastFinishedPulling="2026-02-18 19:47:18.38500925 +0000 UTC m=+801.966964095" observedRunningTime="2026-02-18 19:47:18.743628927 +0000 UTC m=+802.325583802" watchObservedRunningTime="2026-02-18 19:47:23.090087002 +0000 UTC m=+806.672041917" Feb 18 19:47:23 crc kubenswrapper[4932]: I0218 19:47:23.499484 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:23 crc kubenswrapper[4932]: I0218 19:47:23.500499 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:23 crc kubenswrapper[4932]: I0218 19:47:23.509703 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:23 crc kubenswrapper[4932]: I0218 19:47:23.761717 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69545748b6-t8skh" Feb 18 19:47:23 crc kubenswrapper[4932]: I0218 19:47:23.848135 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fgjll"] Feb 18 19:47:33 crc kubenswrapper[4932]: I0218 19:47:33.591828 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bm8xm" Feb 18 19:47:48 crc kubenswrapper[4932]: I0218 19:47:48.893956 4932 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-console/console-f9d7485db-fgjll" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerName="console" containerID="cri-o://fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c" gracePeriod=15 Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.319110 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5"] Feb 18 19:47:49 crc kubenswrapper[4932]: E0218 19:47:49.319731 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="extract-content" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.319747 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="extract-content" Feb 18 19:47:49 crc kubenswrapper[4932]: E0218 19:47:49.319768 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="extract-utilities" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.319777 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="extract-utilities" Feb 18 19:47:49 crc kubenswrapper[4932]: E0218 19:47:49.319792 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="registry-server" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.319803 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="registry-server" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.319942 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="48955357-bac8-4bc1-80f8-939c59861c52" containerName="registry-server" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.320909 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.322620 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.327074 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5"] Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.342519 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fgjll_f9f46b79-f300-42de-a2c3-a35670822a3b/console/0.log" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.342795 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.366521 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx42h\" (UniqueName: \"kubernetes.io/projected/f9f46b79-f300-42de-a2c3-a35670822a3b-kube-api-access-mx42h\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.366687 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.366730 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.366773 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w59gk\" (UniqueName: \"kubernetes.io/projected/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-kube-api-access-w59gk\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.372754 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f46b79-f300-42de-a2c3-a35670822a3b-kube-api-access-mx42h" (OuterVolumeSpecName: "kube-api-access-mx42h") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "kube-api-access-mx42h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.467730 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-serving-cert\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.467767 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-console-config\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.467804 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-trusted-ca-bundle\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.467833 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-service-ca\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.467883 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-oauth-serving-cert\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.467899 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-oauth-config\") pod \"f9f46b79-f300-42de-a2c3-a35670822a3b\" (UID: \"f9f46b79-f300-42de-a2c3-a35670822a3b\") " Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.468057 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.468089 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.468126 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w59gk\" (UniqueName: \"kubernetes.io/projected/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-kube-api-access-w59gk\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.468163 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx42h\" (UniqueName: \"kubernetes.io/projected/f9f46b79-f300-42de-a2c3-a35670822a3b-kube-api-access-mx42h\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.469843 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-console-config" (OuterVolumeSpecName: "console-config") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.469855 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.469893 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-service-ca" (OuterVolumeSpecName: "service-ca") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.470388 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.470397 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.470664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.472858 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.481406 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f9f46b79-f300-42de-a2c3-a35670822a3b" (UID: "f9f46b79-f300-42de-a2c3-a35670822a3b"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.484471 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w59gk\" (UniqueName: \"kubernetes.io/projected/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-kube-api-access-w59gk\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.569102 4932 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.569386 4932 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.569467 4932 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.569538 4932 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.569965 4932 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f9f46b79-f300-42de-a2c3-a35670822a3b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.570076 4932 reconciler_common.go:293] "Volume detached 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f9f46b79-f300-42de-a2c3-a35670822a3b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.655942 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.889990 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5"] Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.950696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" event={"ID":"1a7b21b3-8c0f-4904-8cee-63e55c2e1511","Type":"ContainerStarted","Data":"c38428a6e3b21c78c6d9d3924af8a505c31ccc899e94a488d5cc5f28d383bfc9"} Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.952411 4932 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fgjll_f9f46b79-f300-42de-a2c3-a35670822a3b/console/0.log" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.952446 4932 generic.go:334] "Generic (PLEG): container finished" podID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerID="fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c" exitCode=2 Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.952467 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgjll" event={"ID":"f9f46b79-f300-42de-a2c3-a35670822a3b","Type":"ContainerDied","Data":"fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c"} Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.952482 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgjll" 
event={"ID":"f9f46b79-f300-42de-a2c3-a35670822a3b","Type":"ContainerDied","Data":"a258bd567aafbecb3f6618d81a779cce26f985331e18b4b996cf0d535bef2a19"} Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.952497 4932 scope.go:117] "RemoveContainer" containerID="fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.952593 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fgjll" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.987671 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fgjll"] Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.988005 4932 scope.go:117] "RemoveContainer" containerID="fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c" Feb 18 19:47:49 crc kubenswrapper[4932]: E0218 19:47:49.988511 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c\": container with ID starting with fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c not found: ID does not exist" containerID="fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.988556 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c"} err="failed to get container status \"fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c\": rpc error: code = NotFound desc = could not find container \"fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c\": container with ID starting with fe1dbe69549de48e45fb41bc63a2fced32334e7eea18ca7b7aa834f59f93d40c not found: ID does not exist" Feb 18 19:47:49 crc kubenswrapper[4932]: I0218 19:47:49.992636 4932 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fgjll"] Feb 18 19:47:50 crc kubenswrapper[4932]: I0218 19:47:50.966603 4932 generic.go:334] "Generic (PLEG): container finished" podID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerID="f51a4d3e89d0a5a8fd82a1fed44e1aefc0430abe321fe905050ced3e2abf82fb" exitCode=0 Feb 18 19:47:50 crc kubenswrapper[4932]: I0218 19:47:50.966706 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" event={"ID":"1a7b21b3-8c0f-4904-8cee-63e55c2e1511","Type":"ContainerDied","Data":"f51a4d3e89d0a5a8fd82a1fed44e1aefc0430abe321fe905050ced3e2abf82fb"} Feb 18 19:47:51 crc kubenswrapper[4932]: I0218 19:47:51.192705 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" path="/var/lib/kubelet/pods/f9f46b79-f300-42de-a2c3-a35670822a3b/volumes" Feb 18 19:47:52 crc kubenswrapper[4932]: I0218 19:47:52.987847 4932 generic.go:334] "Generic (PLEG): container finished" podID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerID="cb3448978aad8e47dae129c8a579b242cfd73b2277d5823d432278a0447baf5e" exitCode=0 Feb 18 19:47:52 crc kubenswrapper[4932]: I0218 19:47:52.987925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" event={"ID":"1a7b21b3-8c0f-4904-8cee-63e55c2e1511","Type":"ContainerDied","Data":"cb3448978aad8e47dae129c8a579b242cfd73b2277d5823d432278a0447baf5e"} Feb 18 19:47:53 crc kubenswrapper[4932]: I0218 19:47:53.997024 4932 generic.go:334] "Generic (PLEG): container finished" podID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerID="793ad19debb6605cbb46965ff2a638a15cb70262a2b339958167435302ce32c2" exitCode=0 Feb 18 19:47:53 crc kubenswrapper[4932]: I0218 19:47:53.997089 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" event={"ID":"1a7b21b3-8c0f-4904-8cee-63e55c2e1511","Type":"ContainerDied","Data":"793ad19debb6605cbb46965ff2a638a15cb70262a2b339958167435302ce32c2"} Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.225597 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.262270 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w59gk\" (UniqueName: \"kubernetes.io/projected/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-kube-api-access-w59gk\") pod \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.262359 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-bundle\") pod \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.262479 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-util\") pod \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\" (UID: \"1a7b21b3-8c0f-4904-8cee-63e55c2e1511\") " Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.264074 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-bundle" (OuterVolumeSpecName: "bundle") pod "1a7b21b3-8c0f-4904-8cee-63e55c2e1511" (UID: "1a7b21b3-8c0f-4904-8cee-63e55c2e1511"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.271403 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-kube-api-access-w59gk" (OuterVolumeSpecName: "kube-api-access-w59gk") pod "1a7b21b3-8c0f-4904-8cee-63e55c2e1511" (UID: "1a7b21b3-8c0f-4904-8cee-63e55c2e1511"). InnerVolumeSpecName "kube-api-access-w59gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.278153 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-util" (OuterVolumeSpecName: "util") pod "1a7b21b3-8c0f-4904-8cee-63e55c2e1511" (UID: "1a7b21b3-8c0f-4904-8cee-63e55c2e1511"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.365023 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w59gk\" (UniqueName: \"kubernetes.io/projected/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-kube-api-access-w59gk\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.365051 4932 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:55 crc kubenswrapper[4932]: I0218 19:47:55.365061 4932 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1a7b21b3-8c0f-4904-8cee-63e55c2e1511-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:47:56 crc kubenswrapper[4932]: I0218 19:47:56.013622 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" 
event={"ID":"1a7b21b3-8c0f-4904-8cee-63e55c2e1511","Type":"ContainerDied","Data":"c38428a6e3b21c78c6d9d3924af8a505c31ccc899e94a488d5cc5f28d383bfc9"} Feb 18 19:47:56 crc kubenswrapper[4932]: I0218 19:47:56.013667 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c38428a6e3b21c78c6d9d3924af8a505c31ccc899e94a488d5cc5f28d383bfc9" Feb 18 19:47:56 crc kubenswrapper[4932]: I0218 19:47:56.013696 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sv9n5" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.571990 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4"] Feb 18 19:48:05 crc kubenswrapper[4932]: E0218 19:48:05.572897 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="pull" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.572914 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="pull" Feb 18 19:48:05 crc kubenswrapper[4932]: E0218 19:48:05.572949 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="util" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.572958 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="util" Feb 18 19:48:05 crc kubenswrapper[4932]: E0218 19:48:05.572975 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="extract" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.572982 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="extract" Feb 18 19:48:05 crc kubenswrapper[4932]: E0218 19:48:05.572996 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerName="console" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.573003 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerName="console" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.573199 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9f46b79-f300-42de-a2c3-a35670822a3b" containerName="console" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.573218 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7b21b3-8c0f-4904-8cee-63e55c2e1511" containerName="extract" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.573715 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.576117 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-dml6q" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.576783 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.576905 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.577008 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.577290 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.582910 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4"] Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.594567 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqd9c\" (UniqueName: \"kubernetes.io/projected/840c5b86-35ae-4432-9352-c830b6034aaf-kube-api-access-pqd9c\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.594644 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/840c5b86-35ae-4432-9352-c830b6034aaf-webhook-cert\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.594693 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/840c5b86-35ae-4432-9352-c830b6034aaf-apiservice-cert\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.695553 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqd9c\" (UniqueName: \"kubernetes.io/projected/840c5b86-35ae-4432-9352-c830b6034aaf-kube-api-access-pqd9c\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.695601 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/840c5b86-35ae-4432-9352-c830b6034aaf-webhook-cert\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.695630 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/840c5b86-35ae-4432-9352-c830b6034aaf-apiservice-cert\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.703873 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/840c5b86-35ae-4432-9352-c830b6034aaf-apiservice-cert\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.703909 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/840c5b86-35ae-4432-9352-c830b6034aaf-webhook-cert\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: \"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.717688 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqd9c\" (UniqueName: \"kubernetes.io/projected/840c5b86-35ae-4432-9352-c830b6034aaf-kube-api-access-pqd9c\") pod \"metallb-operator-controller-manager-b4596b48b-tbqq4\" (UID: 
\"840c5b86-35ae-4432-9352-c830b6034aaf\") " pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:05 crc kubenswrapper[4932]: I0218 19:48:05.889658 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.033433 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd"] Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.034319 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.040745 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.040767 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ds2x2" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.040953 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.053964 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd"] Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.100393 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmr6w\" (UniqueName: \"kubernetes.io/projected/c6536847-a5f8-42e0-9493-a016c3f8b53f-kube-api-access-vmr6w\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.100467 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6536847-a5f8-42e0-9493-a016c3f8b53f-apiservice-cert\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.100494 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6536847-a5f8-42e0-9493-a016c3f8b53f-webhook-cert\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.202142 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6536847-a5f8-42e0-9493-a016c3f8b53f-apiservice-cert\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.202205 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6536847-a5f8-42e0-9493-a016c3f8b53f-webhook-cert\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.206246 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c6536847-a5f8-42e0-9493-a016c3f8b53f-apiservice-cert\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: 
\"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.215789 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c6536847-a5f8-42e0-9493-a016c3f8b53f-webhook-cert\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.228295 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmr6w\" (UniqueName: \"kubernetes.io/projected/c6536847-a5f8-42e0-9493-a016c3f8b53f-kube-api-access-vmr6w\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.245238 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmr6w\" (UniqueName: \"kubernetes.io/projected/c6536847-a5f8-42e0-9493-a016c3f8b53f-kube-api-access-vmr6w\") pod \"metallb-operator-webhook-server-7ddd796fb-hvjwd\" (UID: \"c6536847-a5f8-42e0-9493-a016c3f8b53f\") " pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.350988 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.560160 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4"] Feb 18 19:48:06 crc kubenswrapper[4932]: I0218 19:48:06.856204 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd"] Feb 18 19:48:07 crc kubenswrapper[4932]: I0218 19:48:07.081986 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" event={"ID":"840c5b86-35ae-4432-9352-c830b6034aaf","Type":"ContainerStarted","Data":"e7b58ae27127882aabbdd8991b71be5274f019a48e6236e98d69aab35e4ae6cd"} Feb 18 19:48:07 crc kubenswrapper[4932]: I0218 19:48:07.083754 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" event={"ID":"c6536847-a5f8-42e0-9493-a016c3f8b53f","Type":"ContainerStarted","Data":"a3c628572593a80b5070242a20e0978a51cfe973cbffc4ab07aa442f145e5e25"} Feb 18 19:48:12 crc kubenswrapper[4932]: I0218 19:48:12.123902 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" event={"ID":"840c5b86-35ae-4432-9352-c830b6034aaf","Type":"ContainerStarted","Data":"a7859407860e12d75b4172291d51849ca03e8035e97d61473bbdaae55505d47e"} Feb 18 19:48:12 crc kubenswrapper[4932]: I0218 19:48:12.124482 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:12 crc kubenswrapper[4932]: I0218 19:48:12.155387 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" podStartSLOduration=1.9860522280000001 podStartE2EDuration="7.155351558s" 
podCreationTimestamp="2026-02-18 19:48:05 +0000 UTC" firstStartedPulling="2026-02-18 19:48:06.575794331 +0000 UTC m=+850.157749176" lastFinishedPulling="2026-02-18 19:48:11.745093661 +0000 UTC m=+855.327048506" observedRunningTime="2026-02-18 19:48:12.148211402 +0000 UTC m=+855.730166287" watchObservedRunningTime="2026-02-18 19:48:12.155351558 +0000 UTC m=+855.737306423" Feb 18 19:48:13 crc kubenswrapper[4932]: I0218 19:48:13.131151 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" event={"ID":"c6536847-a5f8-42e0-9493-a016c3f8b53f","Type":"ContainerStarted","Data":"d2a18840a94e142760a0d0f0872aee1b013b1f8ca1c49a89447adb1fe93d4942"} Feb 18 19:48:13 crc kubenswrapper[4932]: I0218 19:48:13.131237 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:13 crc kubenswrapper[4932]: I0218 19:48:13.159270 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" podStartSLOduration=1.266925704 podStartE2EDuration="7.159250423s" podCreationTimestamp="2026-02-18 19:48:06 +0000 UTC" firstStartedPulling="2026-02-18 19:48:06.862248255 +0000 UTC m=+850.444203100" lastFinishedPulling="2026-02-18 19:48:12.754572974 +0000 UTC m=+856.336527819" observedRunningTime="2026-02-18 19:48:13.157053829 +0000 UTC m=+856.739008674" watchObservedRunningTime="2026-02-18 19:48:13.159250423 +0000 UTC m=+856.741205268" Feb 18 19:48:19 crc kubenswrapper[4932]: I0218 19:48:19.865685 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9s8nd"] Feb 18 19:48:19 crc kubenswrapper[4932]: I0218 19:48:19.867410 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:19 crc kubenswrapper[4932]: I0218 19:48:19.882617 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9s8nd"] Feb 18 19:48:19 crc kubenswrapper[4932]: I0218 19:48:19.905835 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-catalog-content\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:19 crc kubenswrapper[4932]: I0218 19:48:19.905892 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-utilities\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:19 crc kubenswrapper[4932]: I0218 19:48:19.905921 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjm7t\" (UniqueName: \"kubernetes.io/projected/ba1a775b-f93a-44fb-8588-9088a479826f-kube-api-access-cjm7t\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.007327 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-catalog-content\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.007386 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-utilities\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.007423 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjm7t\" (UniqueName: \"kubernetes.io/projected/ba1a775b-f93a-44fb-8588-9088a479826f-kube-api-access-cjm7t\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.008189 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-catalog-content\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.008431 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-utilities\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.025812 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjm7t\" (UniqueName: \"kubernetes.io/projected/ba1a775b-f93a-44fb-8588-9088a479826f-kube-api-access-cjm7t\") pod \"certified-operators-9s8nd\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.185761 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:20 crc kubenswrapper[4932]: I0218 19:48:20.769317 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9s8nd"] Feb 18 19:48:21 crc kubenswrapper[4932]: I0218 19:48:21.191035 4932 generic.go:334] "Generic (PLEG): container finished" podID="ba1a775b-f93a-44fb-8588-9088a479826f" containerID="9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290" exitCode=0 Feb 18 19:48:21 crc kubenswrapper[4932]: I0218 19:48:21.191080 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerDied","Data":"9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290"} Feb 18 19:48:21 crc kubenswrapper[4932]: I0218 19:48:21.191111 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerStarted","Data":"06075c9ddc5258f34d00a13d1ce1cf80729081c6aedfbee9dcd4bb5fc15000c0"} Feb 18 19:48:22 crc kubenswrapper[4932]: I0218 19:48:22.198058 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerStarted","Data":"adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6"} Feb 18 19:48:23 crc kubenswrapper[4932]: I0218 19:48:23.205611 4932 generic.go:334] "Generic (PLEG): container finished" podID="ba1a775b-f93a-44fb-8588-9088a479826f" containerID="adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6" exitCode=0 Feb 18 19:48:23 crc kubenswrapper[4932]: I0218 19:48:23.205965 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" 
event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerDied","Data":"adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6"} Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.219900 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerStarted","Data":"f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880"} Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.243535 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9s8nd" podStartSLOduration=3.127745989 podStartE2EDuration="6.243516041s" podCreationTimestamp="2026-02-18 19:48:19 +0000 UTC" firstStartedPulling="2026-02-18 19:48:21.192226899 +0000 UTC m=+864.774181734" lastFinishedPulling="2026-02-18 19:48:24.307996931 +0000 UTC m=+867.889951786" observedRunningTime="2026-02-18 19:48:25.240724032 +0000 UTC m=+868.822678877" watchObservedRunningTime="2026-02-18 19:48:25.243516041 +0000 UTC m=+868.825470896" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.662677 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q5rjw"] Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.663855 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.711460 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5rjw"] Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.775720 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-utilities\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.775843 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjl62\" (UniqueName: \"kubernetes.io/projected/743a9d5a-33ac-4937-a081-195105ed16b3-kube-api-access-pjl62\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.775894 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-catalog-content\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.877311 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjl62\" (UniqueName: \"kubernetes.io/projected/743a9d5a-33ac-4937-a081-195105ed16b3-kube-api-access-pjl62\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.877352 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-catalog-content\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.877420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-utilities\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.877837 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-utilities\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.877951 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-catalog-content\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.893798 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjl62\" (UniqueName: \"kubernetes.io/projected/743a9d5a-33ac-4937-a081-195105ed16b3-kube-api-access-pjl62\") pod \"redhat-marketplace-q5rjw\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:25 crc kubenswrapper[4932]: I0218 19:48:25.977687 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:26 crc kubenswrapper[4932]: I0218 19:48:26.244281 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5rjw"] Feb 18 19:48:26 crc kubenswrapper[4932]: I0218 19:48:26.355468 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7ddd796fb-hvjwd" Feb 18 19:48:27 crc kubenswrapper[4932]: I0218 19:48:27.238509 4932 generic.go:334] "Generic (PLEG): container finished" podID="743a9d5a-33ac-4937-a081-195105ed16b3" containerID="3604a684daca1b21cf880f0d994d084bf3376d4f9eda7835931e9e96b1c4b9ef" exitCode=0 Feb 18 19:48:27 crc kubenswrapper[4932]: I0218 19:48:27.238631 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5rjw" event={"ID":"743a9d5a-33ac-4937-a081-195105ed16b3","Type":"ContainerDied","Data":"3604a684daca1b21cf880f0d994d084bf3376d4f9eda7835931e9e96b1c4b9ef"} Feb 18 19:48:27 crc kubenswrapper[4932]: I0218 19:48:27.238959 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5rjw" event={"ID":"743a9d5a-33ac-4937-a081-195105ed16b3","Type":"ContainerStarted","Data":"b49be4835d48c0d3c2ee538792e112030676109057e86ba80e039cfaad394592"} Feb 18 19:48:29 crc kubenswrapper[4932]: I0218 19:48:29.258656 4932 generic.go:334] "Generic (PLEG): container finished" podID="743a9d5a-33ac-4937-a081-195105ed16b3" containerID="ada54d78ad5b66f0b05c4a0f171470d8f4fe3a536e318eb231e890b0d3bb4e21" exitCode=0 Feb 18 19:48:29 crc kubenswrapper[4932]: I0218 19:48:29.258735 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5rjw" event={"ID":"743a9d5a-33ac-4937-a081-195105ed16b3","Type":"ContainerDied","Data":"ada54d78ad5b66f0b05c4a0f171470d8f4fe3a536e318eb231e890b0d3bb4e21"} Feb 18 19:48:30 crc kubenswrapper[4932]: I0218 19:48:30.186018 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:30 crc kubenswrapper[4932]: I0218 19:48:30.186639 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:30 crc kubenswrapper[4932]: I0218 19:48:30.254562 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:30 crc kubenswrapper[4932]: I0218 19:48:30.266698 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5rjw" event={"ID":"743a9d5a-33ac-4937-a081-195105ed16b3","Type":"ContainerStarted","Data":"3862bb1200e50c38a03d6ddf1dcd57b2990f737ae34abf11ff424ed05282b9ae"} Feb 18 19:48:30 crc kubenswrapper[4932]: I0218 19:48:30.312844 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q5rjw" podStartSLOduration=2.74194794 podStartE2EDuration="5.312828275s" podCreationTimestamp="2026-02-18 19:48:25 +0000 UTC" firstStartedPulling="2026-02-18 19:48:27.241005827 +0000 UTC m=+870.822960692" lastFinishedPulling="2026-02-18 19:48:29.811886182 +0000 UTC m=+873.393841027" observedRunningTime="2026-02-18 19:48:30.307237997 +0000 UTC m=+873.889192862" watchObservedRunningTime="2026-02-18 19:48:30.312828275 +0000 UTC m=+873.894783130" Feb 18 19:48:30 crc kubenswrapper[4932]: I0218 19:48:30.317059 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:31 crc kubenswrapper[4932]: I0218 19:48:31.457844 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9s8nd"] Feb 18 19:48:32 crc kubenswrapper[4932]: I0218 19:48:32.281451 4932 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-9s8nd" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="registry-server" containerID="cri-o://f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880" gracePeriod=2 Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.263730 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.276357 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-utilities\") pod \"ba1a775b-f93a-44fb-8588-9088a479826f\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.276406 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjm7t\" (UniqueName: \"kubernetes.io/projected/ba1a775b-f93a-44fb-8588-9088a479826f-kube-api-access-cjm7t\") pod \"ba1a775b-f93a-44fb-8588-9088a479826f\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.276486 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-catalog-content\") pod \"ba1a775b-f93a-44fb-8588-9088a479826f\" (UID: \"ba1a775b-f93a-44fb-8588-9088a479826f\") " Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.277289 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-utilities" (OuterVolumeSpecName: "utilities") pod "ba1a775b-f93a-44fb-8588-9088a479826f" (UID: "ba1a775b-f93a-44fb-8588-9088a479826f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.281372 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1a775b-f93a-44fb-8588-9088a479826f-kube-api-access-cjm7t" (OuterVolumeSpecName: "kube-api-access-cjm7t") pod "ba1a775b-f93a-44fb-8588-9088a479826f" (UID: "ba1a775b-f93a-44fb-8588-9088a479826f"). InnerVolumeSpecName "kube-api-access-cjm7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.289254 4932 generic.go:334] "Generic (PLEG): container finished" podID="ba1a775b-f93a-44fb-8588-9088a479826f" containerID="f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880" exitCode=0 Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.289311 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9s8nd" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.289304 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerDied","Data":"f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880"} Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.289498 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s8nd" event={"ID":"ba1a775b-f93a-44fb-8588-9088a479826f","Type":"ContainerDied","Data":"06075c9ddc5258f34d00a13d1ce1cf80729081c6aedfbee9dcd4bb5fc15000c0"} Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.289529 4932 scope.go:117] "RemoveContainer" containerID="f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.309054 4932 scope.go:117] "RemoveContainer" containerID="adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6" Feb 18 19:48:33 crc 
kubenswrapper[4932]: I0218 19:48:33.328972 4932 scope.go:117] "RemoveContainer" containerID="9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.348332 4932 scope.go:117] "RemoveContainer" containerID="f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880" Feb 18 19:48:33 crc kubenswrapper[4932]: E0218 19:48:33.348780 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880\": container with ID starting with f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880 not found: ID does not exist" containerID="f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.348827 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880"} err="failed to get container status \"f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880\": rpc error: code = NotFound desc = could not find container \"f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880\": container with ID starting with f31cc9aadd4c97dfa7d54ddf9bdc33fd72fe82d104b35bca6fa812cd753d5880 not found: ID does not exist" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.348856 4932 scope.go:117] "RemoveContainer" containerID="adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6" Feb 18 19:48:33 crc kubenswrapper[4932]: E0218 19:48:33.349163 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6\": container with ID starting with adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6 not found: ID does not exist" 
containerID="adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.349225 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6"} err="failed to get container status \"adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6\": rpc error: code = NotFound desc = could not find container \"adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6\": container with ID starting with adf6f8785982d50f3906cfbf31ba59e21c3ab07e26b6be7924dfb6844d850fe6 not found: ID does not exist" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.349251 4932 scope.go:117] "RemoveContainer" containerID="9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290" Feb 18 19:48:33 crc kubenswrapper[4932]: E0218 19:48:33.349578 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290\": container with ID starting with 9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290 not found: ID does not exist" containerID="9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.349614 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290"} err="failed to get container status \"9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290\": rpc error: code = NotFound desc = could not find container \"9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290\": container with ID starting with 9d36a17ebdedc74f296a0848a04a66fa8c149688f4ffdcd30dd012f47bd31290 not found: ID does not exist" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.368978 4932 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba1a775b-f93a-44fb-8588-9088a479826f" (UID: "ba1a775b-f93a-44fb-8588-9088a479826f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.377950 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.377975 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjm7t\" (UniqueName: \"kubernetes.io/projected/ba1a775b-f93a-44fb-8588-9088a479826f-kube-api-access-cjm7t\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.377989 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba1a775b-f93a-44fb-8588-9088a479826f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.624335 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9s8nd"] Feb 18 19:48:33 crc kubenswrapper[4932]: I0218 19:48:33.629278 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9s8nd"] Feb 18 19:48:35 crc kubenswrapper[4932]: I0218 19:48:35.193074 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" path="/var/lib/kubelet/pods/ba1a775b-f93a-44fb-8588-9088a479826f/volumes" Feb 18 19:48:35 crc kubenswrapper[4932]: I0218 19:48:35.978374 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:35 crc 
kubenswrapper[4932]: I0218 19:48:35.978438 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:36 crc kubenswrapper[4932]: I0218 19:48:36.017598 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:36 crc kubenswrapper[4932]: I0218 19:48:36.350430 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:38 crc kubenswrapper[4932]: I0218 19:48:38.657307 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5rjw"] Feb 18 19:48:38 crc kubenswrapper[4932]: I0218 19:48:38.657854 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q5rjw" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" containerName="registry-server" containerID="cri-o://3862bb1200e50c38a03d6ddf1dcd57b2990f737ae34abf11ff424ed05282b9ae" gracePeriod=2 Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.336193 4932 generic.go:334] "Generic (PLEG): container finished" podID="743a9d5a-33ac-4937-a081-195105ed16b3" containerID="3862bb1200e50c38a03d6ddf1dcd57b2990f737ae34abf11ff424ed05282b9ae" exitCode=0 Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.336358 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5rjw" event={"ID":"743a9d5a-33ac-4937-a081-195105ed16b3","Type":"ContainerDied","Data":"3862bb1200e50c38a03d6ddf1dcd57b2990f737ae34abf11ff424ed05282b9ae"} Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.608998 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.662724 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjl62\" (UniqueName: \"kubernetes.io/projected/743a9d5a-33ac-4937-a081-195105ed16b3-kube-api-access-pjl62\") pod \"743a9d5a-33ac-4937-a081-195105ed16b3\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.662846 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-utilities\") pod \"743a9d5a-33ac-4937-a081-195105ed16b3\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.662968 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-catalog-content\") pod \"743a9d5a-33ac-4937-a081-195105ed16b3\" (UID: \"743a9d5a-33ac-4937-a081-195105ed16b3\") " Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.663955 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-utilities" (OuterVolumeSpecName: "utilities") pod "743a9d5a-33ac-4937-a081-195105ed16b3" (UID: "743a9d5a-33ac-4937-a081-195105ed16b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.669547 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743a9d5a-33ac-4937-a081-195105ed16b3-kube-api-access-pjl62" (OuterVolumeSpecName: "kube-api-access-pjl62") pod "743a9d5a-33ac-4937-a081-195105ed16b3" (UID: "743a9d5a-33ac-4937-a081-195105ed16b3"). InnerVolumeSpecName "kube-api-access-pjl62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.696469 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "743a9d5a-33ac-4937-a081-195105ed16b3" (UID: "743a9d5a-33ac-4937-a081-195105ed16b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.764855 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.764949 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjl62\" (UniqueName: \"kubernetes.io/projected/743a9d5a-33ac-4937-a081-195105ed16b3-kube-api-access-pjl62\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:39 crc kubenswrapper[4932]: I0218 19:48:39.764974 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743a9d5a-33ac-4937-a081-195105ed16b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.345032 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q5rjw" event={"ID":"743a9d5a-33ac-4937-a081-195105ed16b3","Type":"ContainerDied","Data":"b49be4835d48c0d3c2ee538792e112030676109057e86ba80e039cfaad394592"} Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.345083 4932 scope.go:117] "RemoveContainer" containerID="3862bb1200e50c38a03d6ddf1dcd57b2990f737ae34abf11ff424ed05282b9ae" Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.345109 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q5rjw" Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.365079 4932 scope.go:117] "RemoveContainer" containerID="ada54d78ad5b66f0b05c4a0f171470d8f4fe3a536e318eb231e890b0d3bb4e21" Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.381415 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5rjw"] Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.388474 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q5rjw"] Feb 18 19:48:40 crc kubenswrapper[4932]: I0218 19:48:40.402890 4932 scope.go:117] "RemoveContainer" containerID="3604a684daca1b21cf880f0d994d084bf3376d4f9eda7835931e9e96b1c4b9ef" Feb 18 19:48:41 crc kubenswrapper[4932]: I0218 19:48:41.186773 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" path="/var/lib/kubelet/pods/743a9d5a-33ac-4937-a081-195105ed16b3/volumes" Feb 18 19:48:45 crc kubenswrapper[4932]: I0218 19:48:45.893315 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-b4596b48b-tbqq4" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785514 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r"] Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.785833 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="registry-server" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785856 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="registry-server" Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.785877 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" 
containerName="extract-utilities" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785887 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" containerName="extract-utilities" Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.785900 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" containerName="registry-server" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785908 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" containerName="registry-server" Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.785924 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" containerName="extract-content" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785931 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" containerName="extract-content" Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.785943 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="extract-content" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785951 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="extract-content" Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.785962 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="extract-utilities" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.785970 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="extract-utilities" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.786111 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="743a9d5a-33ac-4937-a081-195105ed16b3" 
containerName="registry-server" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.786126 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1a775b-f93a-44fb-8588-9088a479826f" containerName="registry-server" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.786679 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.788306 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.788343 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zltgr" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.794344 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-d4twn"] Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.800304 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.803474 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.803499 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.803921 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r"] Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.864748 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-bk4kx"] Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865371 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44wm\" (UniqueName: \"kubernetes.io/projected/849240df-e1e2-40a7-8406-b1033e46b15e-kube-api-access-x44wm\") pod \"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865419 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-conf\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865449 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics-certs\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865560 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865644 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-startup\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865729 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-sockets\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865786 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hqb7\" (UniqueName: \"kubernetes.io/projected/156971d4-9e01-4970-bb94-4511a2c7c94b-kube-api-access-7hqb7\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865828 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/849240df-e1e2-40a7-8406-b1033e46b15e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.865870 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-reloader\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.866775 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-bk4kx" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.869841 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.869847 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.870078 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.871898 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mxgbg" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.886568 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-7pdzl"] Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.887716 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.889819 4932 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.897822 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-7pdzl"] Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967291 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhb6z\" (UniqueName: \"kubernetes.io/projected/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-kube-api-access-vhb6z\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967348 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-startup\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967374 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s897\" (UniqueName: \"kubernetes.io/projected/419fb9f6-a8b4-4b14-bc10-179c9964f712-kube-api-access-9s897\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967418 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-metrics-certs\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:46 crc 
kubenswrapper[4932]: I0218 19:48:46.967436 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-sockets\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967473 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-cert\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967491 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hqb7\" (UniqueName: \"kubernetes.io/projected/156971d4-9e01-4970-bb94-4511a2c7c94b-kube-api-access-7hqb7\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967514 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/849240df-e1e2-40a7-8406-b1033e46b15e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967533 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967566 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-reloader\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967586 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/419fb9f6-a8b4-4b14-bc10-179c9964f712-metallb-excludel2\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967627 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x44wm\" (UniqueName: \"kubernetes.io/projected/849240df-e1e2-40a7-8406-b1033e46b15e-kube-api-access-x44wm\") pod \"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967645 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-metrics-certs\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967664 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-conf\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967696 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics-certs\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967713 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.967862 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-sockets\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.968023 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.968102 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-reloader\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.968129 4932 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.968142 4932 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 
19:48:46.968192 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics-certs podName:156971d4-9e01-4970-bb94-4511a2c7c94b nodeName:}" failed. No retries permitted until 2026-02-18 19:48:47.468163549 +0000 UTC m=+891.050118394 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics-certs") pod "frr-k8s-d4twn" (UID: "156971d4-9e01-4970-bb94-4511a2c7c94b") : secret "frr-k8s-certs-secret" not found Feb 18 19:48:46 crc kubenswrapper[4932]: E0218 19:48:46.968218 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/849240df-e1e2-40a7-8406-b1033e46b15e-cert podName:849240df-e1e2-40a7-8406-b1033e46b15e nodeName:}" failed. No retries permitted until 2026-02-18 19:48:47.46820092 +0000 UTC m=+891.050155845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/849240df-e1e2-40a7-8406-b1033e46b15e-cert") pod "frr-k8s-webhook-server-78b44bf5bb-7k58r" (UID: "849240df-e1e2-40a7-8406-b1033e46b15e") : secret "frr-k8s-webhook-server-cert" not found Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.968273 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-conf\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.968274 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/156971d4-9e01-4970-bb94-4511a2c7c94b-frr-startup\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.989941 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hqb7\" (UniqueName: \"kubernetes.io/projected/156971d4-9e01-4970-bb94-4511a2c7c94b-kube-api-access-7hqb7\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:46 crc kubenswrapper[4932]: I0218 19:48:46.998372 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x44wm\" (UniqueName: \"kubernetes.io/projected/849240df-e1e2-40a7-8406-b1033e46b15e-kube-api-access-x44wm\") pod \"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068343 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-metrics-certs\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhb6z\" (UniqueName: \"kubernetes.io/projected/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-kube-api-access-vhb6z\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068452 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s897\" (UniqueName: \"kubernetes.io/projected/419fb9f6-a8b4-4b14-bc10-179c9964f712-kube-api-access-9s897\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068477 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-metrics-certs\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068515 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-cert\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068554 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.068589 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/419fb9f6-a8b4-4b14-bc10-179c9964f712-metallb-excludel2\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: E0218 19:48:47.068755 4932 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 19:48:47 crc kubenswrapper[4932]: E0218 19:48:47.068838 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist podName:419fb9f6-a8b4-4b14-bc10-179c9964f712 nodeName:}" failed. No retries permitted until 2026-02-18 19:48:47.568814391 +0000 UTC m=+891.150769236 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist") pod "speaker-bk4kx" (UID: "419fb9f6-a8b4-4b14-bc10-179c9964f712") : secret "metallb-memberlist" not found Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.069380 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/419fb9f6-a8b4-4b14-bc10-179c9964f712-metallb-excludel2\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.072311 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-metrics-certs\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.073089 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-metrics-certs\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.073244 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-cert\") pod \"controller-69bbfbf88f-7pdzl\" (UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.084898 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhb6z\" (UniqueName: \"kubernetes.io/projected/7f2268c2-3ba6-4726-82ae-a80c1a5efb85-kube-api-access-vhb6z\") pod \"controller-69bbfbf88f-7pdzl\" 
(UID: \"7f2268c2-3ba6-4726-82ae-a80c1a5efb85\") " pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.089146 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s897\" (UniqueName: \"kubernetes.io/projected/419fb9f6-a8b4-4b14-bc10-179c9964f712-kube-api-access-9s897\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.222366 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.473764 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics-certs\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.474128 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/849240df-e1e2-40a7-8406-b1033e46b15e-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.479576 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/156971d4-9e01-4970-bb94-4511a2c7c94b-metrics-certs\") pod \"frr-k8s-d4twn\" (UID: \"156971d4-9e01-4970-bb94-4511a2c7c94b\") " pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.490614 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/849240df-e1e2-40a7-8406-b1033e46b15e-cert\") pod 
\"frr-k8s-webhook-server-78b44bf5bb-7k58r\" (UID: \"849240df-e1e2-40a7-8406-b1033e46b15e\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.575051 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:47 crc kubenswrapper[4932]: E0218 19:48:47.575271 4932 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 19:48:47 crc kubenswrapper[4932]: E0218 19:48:47.575341 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist podName:419fb9f6-a8b4-4b14-bc10-179c9964f712 nodeName:}" failed. No retries permitted until 2026-02-18 19:48:48.57532344 +0000 UTC m=+892.157278285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist") pod "speaker-bk4kx" (UID: "419fb9f6-a8b4-4b14-bc10-179c9964f712") : secret "metallb-memberlist" not found Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.649744 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-7pdzl"] Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.703516 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:47 crc kubenswrapper[4932]: I0218 19:48:47.718631 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.136479 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r"] Feb 18 19:48:48 crc kubenswrapper[4932]: W0218 19:48:48.141996 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod849240df_e1e2_40a7_8406_b1033e46b15e.slice/crio-f0d5917f07185cdb5db7fa2607988675626ecff3f87b8999143c395fa6df534d WatchSource:0}: Error finding container f0d5917f07185cdb5db7fa2607988675626ecff3f87b8999143c395fa6df534d: Status 404 returned error can't find the container with id f0d5917f07185cdb5db7fa2607988675626ecff3f87b8999143c395fa6df534d Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.404951 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7pdzl" event={"ID":"7f2268c2-3ba6-4726-82ae-a80c1a5efb85","Type":"ContainerStarted","Data":"57ea6c7e98fe584a818ad3ce671ce1cacbbcb0607184fe71e60f760ae1d64b56"} Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.405974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7pdzl" event={"ID":"7f2268c2-3ba6-4726-82ae-a80c1a5efb85","Type":"ContainerStarted","Data":"aaa1d35735d5e5f86f9ad528fc9931461f6e201f577b9722f935c79e0ed94193"} Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.406005 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7pdzl" event={"ID":"7f2268c2-3ba6-4726-82ae-a80c1a5efb85","Type":"ContainerStarted","Data":"53c2278a0990ff75b6f7455d0849f02a82e92c18da31bc8814de16e2aa0bc32d"} Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.406033 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.406623 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" event={"ID":"849240df-e1e2-40a7-8406-b1033e46b15e","Type":"ContainerStarted","Data":"f0d5917f07185cdb5db7fa2607988675626ecff3f87b8999143c395fa6df534d"} Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.409266 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"37ec7d22c224fbf5ff880780fc8039b3c3b9899f57b4bd1ebcfca3783530792f"} Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.425754 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-7pdzl" podStartSLOduration=2.425730441 podStartE2EDuration="2.425730441s" podCreationTimestamp="2026-02-18 19:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:48:48.421567998 +0000 UTC m=+892.003522853" watchObservedRunningTime="2026-02-18 19:48:48.425730441 +0000 UTC m=+892.007685286" Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.588633 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.596273 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/419fb9f6-a8b4-4b14-bc10-179c9964f712-memberlist\") pod \"speaker-bk4kx\" (UID: \"419fb9f6-a8b4-4b14-bc10-179c9964f712\") " pod="metallb-system/speaker-bk4kx" Feb 18 19:48:48 crc kubenswrapper[4932]: I0218 19:48:48.680970 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-bk4kx" Feb 18 19:48:48 crc kubenswrapper[4932]: W0218 19:48:48.711034 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod419fb9f6_a8b4_4b14_bc10_179c9964f712.slice/crio-3f3c5c753a26a99a910513dab4b42f8c2171ae29b726fe74e8c1a72bbeff6199 WatchSource:0}: Error finding container 3f3c5c753a26a99a910513dab4b42f8c2171ae29b726fe74e8c1a72bbeff6199: Status 404 returned error can't find the container with id 3f3c5c753a26a99a910513dab4b42f8c2171ae29b726fe74e8c1a72bbeff6199 Feb 18 19:48:49 crc kubenswrapper[4932]: I0218 19:48:49.418096 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bk4kx" event={"ID":"419fb9f6-a8b4-4b14-bc10-179c9964f712","Type":"ContainerStarted","Data":"f69b70bd25a1f7d1f9c092614156b9da44e2b0a333bfed842b18ff7616fccb6e"} Feb 18 19:48:49 crc kubenswrapper[4932]: I0218 19:48:49.418745 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bk4kx" event={"ID":"419fb9f6-a8b4-4b14-bc10-179c9964f712","Type":"ContainerStarted","Data":"3f3c5c753a26a99a910513dab4b42f8c2171ae29b726fe74e8c1a72bbeff6199"} Feb 18 19:48:50 crc kubenswrapper[4932]: I0218 19:48:50.445534 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bk4kx" event={"ID":"419fb9f6-a8b4-4b14-bc10-179c9964f712","Type":"ContainerStarted","Data":"e404da7e36125783917b375254222b6abe3e9db0c594ff801af908a0776ef417"} Feb 18 19:48:50 crc kubenswrapper[4932]: I0218 19:48:50.465897 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-bk4kx" podStartSLOduration=4.46587694 podStartE2EDuration="4.46587694s" podCreationTimestamp="2026-02-18 19:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:48:50.463100121 +0000 UTC m=+894.045054986" 
watchObservedRunningTime="2026-02-18 19:48:50.46587694 +0000 UTC m=+894.047831785" Feb 18 19:48:51 crc kubenswrapper[4932]: I0218 19:48:51.457424 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-bk4kx" Feb 18 19:48:55 crc kubenswrapper[4932]: I0218 19:48:55.484811 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" event={"ID":"849240df-e1e2-40a7-8406-b1033e46b15e","Type":"ContainerStarted","Data":"1b2fd6f421466f154317a355a0792cea51973554d96aeb7f9c6648a7cfd53fa2"} Feb 18 19:48:55 crc kubenswrapper[4932]: I0218 19:48:55.484970 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:48:55 crc kubenswrapper[4932]: I0218 19:48:55.487831 4932 generic.go:334] "Generic (PLEG): container finished" podID="156971d4-9e01-4970-bb94-4511a2c7c94b" containerID="2eaf09ca7b5c78fc0695c6ba59e607c3563bde4ae2e505cea21f3c8dea6c5c04" exitCode=0 Feb 18 19:48:55 crc kubenswrapper[4932]: I0218 19:48:55.487878 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerDied","Data":"2eaf09ca7b5c78fc0695c6ba59e607c3563bde4ae2e505cea21f3c8dea6c5c04"} Feb 18 19:48:55 crc kubenswrapper[4932]: I0218 19:48:55.557069 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" podStartSLOduration=2.753395041 podStartE2EDuration="9.557045593s" podCreationTimestamp="2026-02-18 19:48:46 +0000 UTC" firstStartedPulling="2026-02-18 19:48:48.14548754 +0000 UTC m=+891.727442385" lastFinishedPulling="2026-02-18 19:48:54.949138092 +0000 UTC m=+898.531092937" observedRunningTime="2026-02-18 19:48:55.520643675 +0000 UTC m=+899.102598530" watchObservedRunningTime="2026-02-18 19:48:55.557045593 +0000 UTC m=+899.139000448" Feb 18 19:48:56 crc 
kubenswrapper[4932]: I0218 19:48:56.495018 4932 generic.go:334] "Generic (PLEG): container finished" podID="156971d4-9e01-4970-bb94-4511a2c7c94b" containerID="f02e212b15f1ced0b4f05a61ccedd52bf25ff21df803d0b005c333e7f46d8a1f" exitCode=0 Feb 18 19:48:56 crc kubenswrapper[4932]: I0218 19:48:56.495095 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerDied","Data":"f02e212b15f1ced0b4f05a61ccedd52bf25ff21df803d0b005c333e7f46d8a1f"} Feb 18 19:48:57 crc kubenswrapper[4932]: I0218 19:48:57.227643 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-7pdzl" Feb 18 19:48:57 crc kubenswrapper[4932]: I0218 19:48:57.506140 4932 generic.go:334] "Generic (PLEG): container finished" podID="156971d4-9e01-4970-bb94-4511a2c7c94b" containerID="155f4849c322b8e7601cfb7428eb47b197ac31fcb99541433514e991b1677969" exitCode=0 Feb 18 19:48:57 crc kubenswrapper[4932]: I0218 19:48:57.506230 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerDied","Data":"155f4849c322b8e7601cfb7428eb47b197ac31fcb99541433514e991b1677969"} Feb 18 19:48:57 crc kubenswrapper[4932]: I0218 19:48:57.605979 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:48:57 crc kubenswrapper[4932]: I0218 19:48:57.606022 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522497 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"93ea22518bb669e6f6e7e727f1a9229f5d14fa5f801b0b2352dfda1738534f32"} Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522838 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-d4twn" Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522853 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"f487e31b8b6a614802ac81a4c4ef81c0a85ca454bc8d445b3b741833734d677b"} Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522865 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"a852ed33248d7328f7ff0c4c1261a7ab6f80cb93e6f32ec290c33ccf3301013c"} Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522877 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"f89858a65fa93214ccf31d2f903117292f248c60ba453c04b95e122e1a789aa3"} Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522888 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"bba6b26b101a141c6678d8efe6846c0148f4af1f625434728d922e6b803de8ec"} Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.522898 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d4twn" 
event={"ID":"156971d4-9e01-4970-bb94-4511a2c7c94b","Type":"ContainerStarted","Data":"03778760b4671427fe0bbde1aadd57f1036ae1fc48897092d97d040f2e9db7c0"} Feb 18 19:48:58 crc kubenswrapper[4932]: I0218 19:48:58.553681 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-d4twn" podStartSLOduration=5.47160886 podStartE2EDuration="12.553657617s" podCreationTimestamp="2026-02-18 19:48:46 +0000 UTC" firstStartedPulling="2026-02-18 19:48:47.851007969 +0000 UTC m=+891.432962814" lastFinishedPulling="2026-02-18 19:48:54.933056736 +0000 UTC m=+898.515011571" observedRunningTime="2026-02-18 19:48:58.549148865 +0000 UTC m=+902.131103710" watchObservedRunningTime="2026-02-18 19:48:58.553657617 +0000 UTC m=+902.135612472" Feb 18 19:49:02 crc kubenswrapper[4932]: I0218 19:49:02.719655 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-d4twn" Feb 18 19:49:02 crc kubenswrapper[4932]: I0218 19:49:02.758401 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-d4twn" Feb 18 19:49:07 crc kubenswrapper[4932]: I0218 19:49:07.709375 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7k58r" Feb 18 19:49:07 crc kubenswrapper[4932]: I0218 19:49:07.722368 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-d4twn" Feb 18 19:49:09 crc kubenswrapper[4932]: I0218 19:49:09.082889 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-bk4kx" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.228581 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-lglkz"] Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.231237 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.238461 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tz5r\" (UniqueName: \"kubernetes.io/projected/ad790bf4-8b1b-43a0-b027-64ef1f97688b-kube-api-access-4tz5r\") pod \"openstack-operator-index-lglkz\" (UID: \"ad790bf4-8b1b-43a0-b027-64ef1f97688b\") " pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.242727 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.243501 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7jzpj" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.244628 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.260859 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lglkz"] Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.340297 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tz5r\" (UniqueName: \"kubernetes.io/projected/ad790bf4-8b1b-43a0-b027-64ef1f97688b-kube-api-access-4tz5r\") pod \"openstack-operator-index-lglkz\" (UID: \"ad790bf4-8b1b-43a0-b027-64ef1f97688b\") " pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.360067 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tz5r\" (UniqueName: \"kubernetes.io/projected/ad790bf4-8b1b-43a0-b027-64ef1f97688b-kube-api-access-4tz5r\") pod \"openstack-operator-index-lglkz\" (UID: 
\"ad790bf4-8b1b-43a0-b027-64ef1f97688b\") " pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.569468 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:12 crc kubenswrapper[4932]: I0218 19:49:12.791159 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lglkz"] Feb 18 19:49:13 crc kubenswrapper[4932]: I0218 19:49:13.634867 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lglkz" event={"ID":"ad790bf4-8b1b-43a0-b027-64ef1f97688b","Type":"ContainerStarted","Data":"db23829867202d64d82ccd46bc9f4bacf0c6144e4bdd52f1eadd9a9071acff25"} Feb 18 19:49:15 crc kubenswrapper[4932]: I0218 19:49:15.598312 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-lglkz"] Feb 18 19:49:15 crc kubenswrapper[4932]: I0218 19:49:15.653649 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lglkz" event={"ID":"ad790bf4-8b1b-43a0-b027-64ef1f97688b","Type":"ContainerStarted","Data":"ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc"} Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.207027 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-lglkz" podStartSLOduration=2.18618398 podStartE2EDuration="4.206995722s" podCreationTimestamp="2026-02-18 19:49:12 +0000 UTC" firstStartedPulling="2026-02-18 19:49:12.800409899 +0000 UTC m=+916.382364734" lastFinishedPulling="2026-02-18 19:49:14.821221631 +0000 UTC m=+918.403176476" observedRunningTime="2026-02-18 19:49:15.683057284 +0000 UTC m=+919.265012169" watchObservedRunningTime="2026-02-18 19:49:16.206995722 +0000 UTC m=+919.788950667" Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.213261 
4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hbl5z"] Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.214794 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.229347 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hbl5z"] Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.399881 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brzzd\" (UniqueName: \"kubernetes.io/projected/80acb08c-9e7c-49a6-908f-83d3b958e7b2-kube-api-access-brzzd\") pod \"openstack-operator-index-hbl5z\" (UID: \"80acb08c-9e7c-49a6-908f-83d3b958e7b2\") " pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.501258 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brzzd\" (UniqueName: \"kubernetes.io/projected/80acb08c-9e7c-49a6-908f-83d3b958e7b2-kube-api-access-brzzd\") pod \"openstack-operator-index-hbl5z\" (UID: \"80acb08c-9e7c-49a6-908f-83d3b958e7b2\") " pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.530581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brzzd\" (UniqueName: \"kubernetes.io/projected/80acb08c-9e7c-49a6-908f-83d3b958e7b2-kube-api-access-brzzd\") pod \"openstack-operator-index-hbl5z\" (UID: \"80acb08c-9e7c-49a6-908f-83d3b958e7b2\") " pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.539839 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:16 crc kubenswrapper[4932]: I0218 19:49:16.661514 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-lglkz" podUID="ad790bf4-8b1b-43a0-b027-64ef1f97688b" containerName="registry-server" containerID="cri-o://ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc" gracePeriod=2 Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.024207 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.031545 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hbl5z"] Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.212216 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tz5r\" (UniqueName: \"kubernetes.io/projected/ad790bf4-8b1b-43a0-b027-64ef1f97688b-kube-api-access-4tz5r\") pod \"ad790bf4-8b1b-43a0-b027-64ef1f97688b\" (UID: \"ad790bf4-8b1b-43a0-b027-64ef1f97688b\") " Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.217709 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad790bf4-8b1b-43a0-b027-64ef1f97688b-kube-api-access-4tz5r" (OuterVolumeSpecName: "kube-api-access-4tz5r") pod "ad790bf4-8b1b-43a0-b027-64ef1f97688b" (UID: "ad790bf4-8b1b-43a0-b027-64ef1f97688b"). InnerVolumeSpecName "kube-api-access-4tz5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.313868 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tz5r\" (UniqueName: \"kubernetes.io/projected/ad790bf4-8b1b-43a0-b027-64ef1f97688b-kube-api-access-4tz5r\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.672084 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hbl5z" event={"ID":"80acb08c-9e7c-49a6-908f-83d3b958e7b2","Type":"ContainerStarted","Data":"922654891b19f144830fd6ae2250c6ada163262b8811bdad5d0544137968f511"} Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.672146 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hbl5z" event={"ID":"80acb08c-9e7c-49a6-908f-83d3b958e7b2","Type":"ContainerStarted","Data":"13c4361fa8e7450a59cec819f105c251cfa919ada17f7c5c3fbadaf179e41f71"} Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.674630 4932 generic.go:334] "Generic (PLEG): container finished" podID="ad790bf4-8b1b-43a0-b027-64ef1f97688b" containerID="ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc" exitCode=0 Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.674721 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lglkz" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.674757 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lglkz" event={"ID":"ad790bf4-8b1b-43a0-b027-64ef1f97688b","Type":"ContainerDied","Data":"ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc"} Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.674827 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lglkz" event={"ID":"ad790bf4-8b1b-43a0-b027-64ef1f97688b","Type":"ContainerDied","Data":"db23829867202d64d82ccd46bc9f4bacf0c6144e4bdd52f1eadd9a9071acff25"} Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.674860 4932 scope.go:117] "RemoveContainer" containerID="ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.697335 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hbl5z" podStartSLOduration=1.6432701079999998 podStartE2EDuration="1.697319861s" podCreationTimestamp="2026-02-18 19:49:16 +0000 UTC" firstStartedPulling="2026-02-18 19:49:17.02531358 +0000 UTC m=+920.607268425" lastFinishedPulling="2026-02-18 19:49:17.079363323 +0000 UTC m=+920.661318178" observedRunningTime="2026-02-18 19:49:17.695488096 +0000 UTC m=+921.277442971" watchObservedRunningTime="2026-02-18 19:49:17.697319861 +0000 UTC m=+921.279274716" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.701783 4932 scope.go:117] "RemoveContainer" containerID="ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc" Feb 18 19:49:17 crc kubenswrapper[4932]: E0218 19:49:17.702431 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc\": container 
with ID starting with ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc not found: ID does not exist" containerID="ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.702473 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc"} err="failed to get container status \"ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc\": rpc error: code = NotFound desc = could not find container \"ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc\": container with ID starting with ce44c83e379cd1c49d1b47f0d4bf4a19efde718adcc2a41741176b867f6d89cc not found: ID does not exist" Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.721442 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-lglkz"] Feb 18 19:49:17 crc kubenswrapper[4932]: I0218 19:49:17.729400 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-lglkz"] Feb 18 19:49:19 crc kubenswrapper[4932]: I0218 19:49:19.193558 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad790bf4-8b1b-43a0-b027-64ef1f97688b" path="/var/lib/kubelet/pods/ad790bf4-8b1b-43a0-b027-64ef1f97688b/volumes" Feb 18 19:49:26 crc kubenswrapper[4932]: I0218 19:49:26.540880 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:26 crc kubenswrapper[4932]: I0218 19:49:26.541545 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:26 crc kubenswrapper[4932]: I0218 19:49:26.586079 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:26 crc 
kubenswrapper[4932]: I0218 19:49:26.774834 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-hbl5z" Feb 18 19:49:27 crc kubenswrapper[4932]: I0218 19:49:27.605813 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:49:27 crc kubenswrapper[4932]: I0218 19:49:27.606876 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.315264 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2"] Feb 18 19:49:29 crc kubenswrapper[4932]: E0218 19:49:29.315694 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad790bf4-8b1b-43a0-b027-64ef1f97688b" containerName="registry-server" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.315713 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad790bf4-8b1b-43a0-b027-64ef1f97688b" containerName="registry-server" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.315982 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad790bf4-8b1b-43a0-b027-64ef1f97688b" containerName="registry-server" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.317826 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.324106 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-qsshk" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.332727 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2"] Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.483491 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-util\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.483711 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-bundle\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.483836 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtlsz\" (UniqueName: \"kubernetes.io/projected/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-kube-api-access-rtlsz\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 
19:49:29.586477 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-bundle\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.587974 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-bundle\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.587986 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtlsz\" (UniqueName: \"kubernetes.io/projected/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-kube-api-access-rtlsz\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.588274 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-util\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.588930 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-util\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.625602 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtlsz\" (UniqueName: \"kubernetes.io/projected/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-kube-api-access-rtlsz\") pod \"cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:29 crc kubenswrapper[4932]: I0218 19:49:29.662950 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:30 crc kubenswrapper[4932]: I0218 19:49:30.120948 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2"] Feb 18 19:49:30 crc kubenswrapper[4932]: I0218 19:49:30.785052 4932 generic.go:334] "Generic (PLEG): container finished" podID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerID="02fea0e719048f7f6b29ca81cc6bf4132bc9ef6b1f16295e859f28e8b18e1563" exitCode=0 Feb 18 19:49:30 crc kubenswrapper[4932]: I0218 19:49:30.785137 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" event={"ID":"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762","Type":"ContainerDied","Data":"02fea0e719048f7f6b29ca81cc6bf4132bc9ef6b1f16295e859f28e8b18e1563"} Feb 18 19:49:30 crc kubenswrapper[4932]: I0218 19:49:30.785215 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" event={"ID":"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762","Type":"ContainerStarted","Data":"16d0afd0735b29154ca31e96cfd81cb1ab5e28772ab683f4f174d2e10050e07b"} Feb 18 19:49:31 crc kubenswrapper[4932]: I0218 19:49:31.797483 4932 generic.go:334] "Generic (PLEG): container finished" podID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerID="2da580f34eeec2c50f4f1bb64ba727ae9b68d89234d3f2d58a2e49e7b095b8e6" exitCode=0 Feb 18 19:49:31 crc kubenswrapper[4932]: I0218 19:49:31.797968 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" event={"ID":"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762","Type":"ContainerDied","Data":"2da580f34eeec2c50f4f1bb64ba727ae9b68d89234d3f2d58a2e49e7b095b8e6"} Feb 18 19:49:32 crc kubenswrapper[4932]: I0218 19:49:32.808630 4932 generic.go:334] "Generic (PLEG): container finished" podID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerID="cf995c532a3a5612c7ab11dad314f25deca245da27ce35c1fd72ae7f294b024f" exitCode=0 Feb 18 19:49:32 crc kubenswrapper[4932]: I0218 19:49:32.808750 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" event={"ID":"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762","Type":"ContainerDied","Data":"cf995c532a3a5612c7ab11dad314f25deca245da27ce35c1fd72ae7f294b024f"} Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.166314 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.356027 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-util\") pod \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.356552 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-bundle\") pod \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.356693 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtlsz\" (UniqueName: \"kubernetes.io/projected/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-kube-api-access-rtlsz\") pod \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\" (UID: \"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762\") " Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.357359 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-bundle" (OuterVolumeSpecName: "bundle") pod "ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" (UID: "ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.363147 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-kube-api-access-rtlsz" (OuterVolumeSpecName: "kube-api-access-rtlsz") pod "ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" (UID: "ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762"). InnerVolumeSpecName "kube-api-access-rtlsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.381525 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-util" (OuterVolumeSpecName: "util") pod "ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" (UID: "ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.458794 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtlsz\" (UniqueName: \"kubernetes.io/projected/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-kube-api-access-rtlsz\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.458838 4932 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.458855 4932 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.829821 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" event={"ID":"ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762","Type":"ContainerDied","Data":"16d0afd0735b29154ca31e96cfd81cb1ab5e28772ab683f4f174d2e10050e07b"} Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.829886 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d0afd0735b29154ca31e96cfd81cb1ab5e28772ab683f4f174d2e10050e07b" Feb 18 19:49:34 crc kubenswrapper[4932]: I0218 19:49:34.829971 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cf30b60e4cbd5fa77a4b716a30a24c081a8eb399f8cab4cb05b7845e70sz8v2" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.505167 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr"] Feb 18 19:49:36 crc kubenswrapper[4932]: E0218 19:49:36.505545 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="util" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.505565 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="util" Feb 18 19:49:36 crc kubenswrapper[4932]: E0218 19:49:36.505598 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="extract" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.505609 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="extract" Feb 18 19:49:36 crc kubenswrapper[4932]: E0218 19:49:36.505626 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="pull" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.505638 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="pull" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.505837 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf87a8f-5597-4fa2-b8ab-7ad4f22b7762" containerName="extract" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.506526 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.519019 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zd4xf" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.529153 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr"] Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.686733 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pmft\" (UniqueName: \"kubernetes.io/projected/ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce-kube-api-access-2pmft\") pod \"openstack-operator-controller-init-54f996c4d6-kzsqr\" (UID: \"ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce\") " pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.789041 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pmft\" (UniqueName: \"kubernetes.io/projected/ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce-kube-api-access-2pmft\") pod \"openstack-operator-controller-init-54f996c4d6-kzsqr\" (UID: \"ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce\") " pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.817485 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pmft\" (UniqueName: \"kubernetes.io/projected/ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce-kube-api-access-2pmft\") pod \"openstack-operator-controller-init-54f996c4d6-kzsqr\" (UID: \"ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce\") " pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:36 crc kubenswrapper[4932]: I0218 19:49:36.822058 4932 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:37 crc kubenswrapper[4932]: I0218 19:49:37.330565 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr"] Feb 18 19:49:37 crc kubenswrapper[4932]: I0218 19:49:37.851924 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" event={"ID":"ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce","Type":"ContainerStarted","Data":"f4287e62498162809da5a27b02dc8f14aab62b8c45c4680dffc3d74a395b7405"} Feb 18 19:49:41 crc kubenswrapper[4932]: I0218 19:49:41.876528 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" event={"ID":"ca42d108-3e4a-4b0a-85fb-4bcb1699d8ce","Type":"ContainerStarted","Data":"5ade3468ee541d4f66060329e19913f9fb5c73b097fadc0a004c5f9581e7b18f"} Feb 18 19:49:41 crc kubenswrapper[4932]: I0218 19:49:41.877037 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:41 crc kubenswrapper[4932]: I0218 19:49:41.907382 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" podStartSLOduration=2.253425285 podStartE2EDuration="5.907349077s" podCreationTimestamp="2026-02-18 19:49:36 +0000 UTC" firstStartedPulling="2026-02-18 19:49:37.337853398 +0000 UTC m=+940.919808253" lastFinishedPulling="2026-02-18 19:49:40.9917772 +0000 UTC m=+944.573732045" observedRunningTime="2026-02-18 19:49:41.9034371 +0000 UTC m=+945.485391955" watchObservedRunningTime="2026-02-18 19:49:41.907349077 +0000 UTC m=+945.489303932" Feb 18 19:49:46 crc kubenswrapper[4932]: I0218 19:49:46.824869 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-54f996c4d6-kzsqr" Feb 18 19:49:57 crc kubenswrapper[4932]: I0218 19:49:57.606426 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:49:57 crc kubenswrapper[4932]: I0218 19:49:57.606861 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:49:57 crc kubenswrapper[4932]: I0218 19:49:57.606908 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:49:57 crc kubenswrapper[4932]: I0218 19:49:57.607485 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0796b82991176676a1533452d61ed93202733b7f85192cab295504d343f7c992"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:49:57 crc kubenswrapper[4932]: I0218 19:49:57.607550 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://0796b82991176676a1533452d61ed93202733b7f85192cab295504d343f7c992" gracePeriod=600 Feb 18 19:49:58 crc kubenswrapper[4932]: I0218 19:49:58.009487 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="0796b82991176676a1533452d61ed93202733b7f85192cab295504d343f7c992" exitCode=0 Feb 18 19:49:58 crc kubenswrapper[4932]: I0218 19:49:58.009549 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"0796b82991176676a1533452d61ed93202733b7f85192cab295504d343f7c992"} Feb 18 19:49:58 crc kubenswrapper[4932]: I0218 19:49:58.009605 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"435f6d4431c63fe1b1d0a709b03d86681659a5d37fb618d6ab36ba1010fce349"} Feb 18 19:49:58 crc kubenswrapper[4932]: I0218 19:49:58.009642 4932 scope.go:117] "RemoveContainer" containerID="f3b543e6ec63bdf78c858f95870e024438d65d986dd0f72b674fc74756af06be" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.772706 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.778545 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.782149 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-bz8sq" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.788633 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.789543 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.794835 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-psps4" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.794902 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.806913 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.828045 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.828761 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.836586 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-s8t4r" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.843101 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.861231 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.862284 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.877257 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-r9dcb" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.916860 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.917768 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.925765 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-68sgs" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.926567 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.927755 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxqlp\" (UniqueName: \"kubernetes.io/projected/33f4dcd6-0eea-40f3-9968-458594d82013-kube-api-access-dxqlp\") pod \"designate-operator-controller-manager-55cc45767f-mp2bb\" (UID: \"33f4dcd6-0eea-40f3-9968-458594d82013\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.927790 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktdsl\" (UniqueName: \"kubernetes.io/projected/59af7bc1-7774-4102-ae6c-2d7f820d3b93-kube-api-access-ktdsl\") pod \"cinder-operator-controller-manager-57746b5ff9-56fbf\" (UID: \"59af7bc1-7774-4102-ae6c-2d7f820d3b93\") " 
pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.927822 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwztk\" (UniqueName: \"kubernetes.io/projected/4f286c1e-d207-47a0-86be-6711856071a7-kube-api-access-lwztk\") pod \"glance-operator-controller-manager-68c6d499cb-b46xh\" (UID: \"4f286c1e-d207-47a0-86be-6711856071a7\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.927844 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blhxn\" (UniqueName: \"kubernetes.io/projected/d03b5e78-a45c-49aa-8915-336be03c8c94-kube-api-access-blhxn\") pod \"barbican-operator-controller-manager-c4b7d6946-clwts\" (UID: \"d03b5e78-a45c-49aa-8915-336be03c8c94\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.950845 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.955227 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.956235 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.959026 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zspnl" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.968342 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.979219 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7"] Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.980067 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.985581 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 18 19:50:06 crc kubenswrapper[4932]: I0218 19:50:06.985634 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f6sbm" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.000314 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.023463 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.024234 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.028822 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.028884 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-ch744" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.029969 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxqlp\" (UniqueName: \"kubernetes.io/projected/33f4dcd6-0eea-40f3-9968-458594d82013-kube-api-access-dxqlp\") pod \"designate-operator-controller-manager-55cc45767f-mp2bb\" (UID: \"33f4dcd6-0eea-40f3-9968-458594d82013\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.030096 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktdsl\" (UniqueName: \"kubernetes.io/projected/59af7bc1-7774-4102-ae6c-2d7f820d3b93-kube-api-access-ktdsl\") pod \"cinder-operator-controller-manager-57746b5ff9-56fbf\" (UID: \"59af7bc1-7774-4102-ae6c-2d7f820d3b93\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.030204 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5469v\" (UniqueName: \"kubernetes.io/projected/a0fe77a1-c4a7-422f-b7c2-3062c2af1393-kube-api-access-5469v\") pod \"heat-operator-controller-manager-9595d6797-7ssxs\" (UID: \"a0fe77a1-c4a7-422f-b7c2-3062c2af1393\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.030295 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lwztk\" (UniqueName: \"kubernetes.io/projected/4f286c1e-d207-47a0-86be-6711856071a7-kube-api-access-lwztk\") pod \"glance-operator-controller-manager-68c6d499cb-b46xh\" (UID: \"4f286c1e-d207-47a0-86be-6711856071a7\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.030392 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blhxn\" (UniqueName: \"kubernetes.io/projected/d03b5e78-a45c-49aa-8915-336be03c8c94-kube-api-access-blhxn\") pod \"barbican-operator-controller-manager-c4b7d6946-clwts\" (UID: \"d03b5e78-a45c-49aa-8915-336be03c8c94\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.061952 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwztk\" (UniqueName: \"kubernetes.io/projected/4f286c1e-d207-47a0-86be-6711856071a7-kube-api-access-lwztk\") pod \"glance-operator-controller-manager-68c6d499cb-b46xh\" (UID: \"4f286c1e-d207-47a0-86be-6711856071a7\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.063095 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blhxn\" (UniqueName: \"kubernetes.io/projected/d03b5e78-a45c-49aa-8915-336be03c8c94-kube-api-access-blhxn\") pod \"barbican-operator-controller-manager-c4b7d6946-clwts\" (UID: \"d03b5e78-a45c-49aa-8915-336be03c8c94\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.068585 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktdsl\" (UniqueName: \"kubernetes.io/projected/59af7bc1-7774-4102-ae6c-2d7f820d3b93-kube-api-access-ktdsl\") pod 
\"cinder-operator-controller-manager-57746b5ff9-56fbf\" (UID: \"59af7bc1-7774-4102-ae6c-2d7f820d3b93\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.089627 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxqlp\" (UniqueName: \"kubernetes.io/projected/33f4dcd6-0eea-40f3-9968-458594d82013-kube-api-access-dxqlp\") pod \"designate-operator-controller-manager-55cc45767f-mp2bb\" (UID: \"33f4dcd6-0eea-40f3-9968-458594d82013\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.108423 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.119696 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.138833 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgvpc\" (UniqueName: \"kubernetes.io/projected/376e77a5-0e6f-4999-a037-96154984442f-kube-api-access-sgvpc\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.139919 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxs72\" (UniqueName: \"kubernetes.io/projected/3967efad-3234-435e-b755-f684ffd74918-kube-api-access-vxs72\") pod \"horizon-operator-controller-manager-54fb488b88-4m7xr\" (UID: \"3967efad-3234-435e-b755-f684ffd74918\") " 
pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.140810 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24slc\" (UniqueName: \"kubernetes.io/projected/6b690eeb-2e37-49d8-9f44-9ca086aa2f00-kube-api-access-24slc\") pod \"ironic-operator-controller-manager-6494cdbf8f-qqxpn\" (UID: \"6b690eeb-2e37-49d8-9f44-9ca086aa2f00\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.140931 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5469v\" (UniqueName: \"kubernetes.io/projected/a0fe77a1-c4a7-422f-b7c2-3062c2af1393-kube-api-access-5469v\") pod \"heat-operator-controller-manager-9595d6797-7ssxs\" (UID: \"a0fe77a1-c4a7-422f-b7c2-3062c2af1393\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.141079 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.141877 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.181878 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.182242 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.188332 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.201035 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5469v\" (UniqueName: \"kubernetes.io/projected/a0fe77a1-c4a7-422f-b7c2-3062c2af1393-kube-api-access-5469v\") pod \"heat-operator-controller-manager-9595d6797-7ssxs\" (UID: \"a0fe77a1-c4a7-422f-b7c2-3062c2af1393\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.232118 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-x948b" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.233434 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.242555 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.242655 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgvpc\" (UniqueName: \"kubernetes.io/projected/376e77a5-0e6f-4999-a037-96154984442f-kube-api-access-sgvpc\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.242728 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxs72\" (UniqueName: \"kubernetes.io/projected/3967efad-3234-435e-b755-f684ffd74918-kube-api-access-vxs72\") pod \"horizon-operator-controller-manager-54fb488b88-4m7xr\" (UID: \"3967efad-3234-435e-b755-f684ffd74918\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.242758 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv8bm\" (UniqueName: \"kubernetes.io/projected/ffff0e6b-64e2-499f-8296-f374c5d62450-kube-api-access-wv8bm\") pod \"manila-operator-controller-manager-96fff9cb8-brmw7\" (UID: \"ffff0e6b-64e2-499f-8296-f374c5d62450\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.242787 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24slc\" (UniqueName: \"kubernetes.io/projected/6b690eeb-2e37-49d8-9f44-9ca086aa2f00-kube-api-access-24slc\") pod \"ironic-operator-controller-manager-6494cdbf8f-qqxpn\" (UID: \"6b690eeb-2e37-49d8-9f44-9ca086aa2f00\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:07 crc kubenswrapper[4932]: E0218 19:50:07.243411 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:07 crc kubenswrapper[4932]: E0218 19:50:07.243461 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert podName:376e77a5-0e6f-4999-a037-96154984442f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:07.743445303 +0000 UTC m=+971.325400148 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert") pod "infra-operator-controller-manager-66d6b5f488-dv4j7" (UID: "376e77a5-0e6f-4999-a037-96154984442f") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.278847 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxs72\" (UniqueName: \"kubernetes.io/projected/3967efad-3234-435e-b755-f684ffd74918-kube-api-access-vxs72\") pod \"horizon-operator-controller-manager-54fb488b88-4m7xr\" (UID: \"3967efad-3234-435e-b755-f684ffd74918\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.279685 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24slc\" (UniqueName: \"kubernetes.io/projected/6b690eeb-2e37-49d8-9f44-9ca086aa2f00-kube-api-access-24slc\") pod 
\"ironic-operator-controller-manager-6494cdbf8f-qqxpn\" (UID: \"6b690eeb-2e37-49d8-9f44-9ca086aa2f00\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.284048 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgvpc\" (UniqueName: \"kubernetes.io/projected/376e77a5-0e6f-4999-a037-96154984442f-kube-api-access-sgvpc\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.284433 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.285186 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.285203 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.285338 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.306403 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-v6p68" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.307667 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.308440 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.318316 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9wg25" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.318951 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.319714 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.325009 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2kflp" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.336641 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.337548 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.342229 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.343835 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv8bm\" (UniqueName: \"kubernetes.io/projected/ffff0e6b-64e2-499f-8296-f374c5d62450-kube-api-access-wv8bm\") pod \"manila-operator-controller-manager-96fff9cb8-brmw7\" (UID: \"ffff0e6b-64e2-499f-8296-f374c5d62450\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.349816 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.353841 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-fb6fl" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.369733 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.377801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv8bm\" (UniqueName: \"kubernetes.io/projected/ffff0e6b-64e2-499f-8296-f374c5d62450-kube-api-access-wv8bm\") pod \"manila-operator-controller-manager-96fff9cb8-brmw7\" (UID: \"ffff0e6b-64e2-499f-8296-f374c5d62450\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.385814 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.387015 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.389066 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-2m5xs" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.419130 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.419892 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.442903 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.443891 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.445678 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkvb9\" (UniqueName: \"kubernetes.io/projected/9647c082-6b36-4f38-b1fb-663f095997e9-kube-api-access-lkvb9\") pod \"keystone-operator-controller-manager-6c78d668d5-m86tn\" (UID: \"9647c082-6b36-4f38-b1fb-663f095997e9\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.445740 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvp48\" (UniqueName: \"kubernetes.io/projected/4eb6df58-4273-41ac-8d6d-34d04a30adef-kube-api-access-qvp48\") pod \"neutron-operator-controller-manager-54967dbbdf-4hrhw\" (UID: \"4eb6df58-4273-41ac-8d6d-34d04a30adef\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.445760 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94d6r\" (UniqueName: \"kubernetes.io/projected/4f1057b4-de48-4123-986c-795f9957899a-kube-api-access-94d6r\") pod \"nova-operator-controller-manager-5ddd85db87-wt2rd\" (UID: \"4f1057b4-de48-4123-986c-795f9957899a\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.445787 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvz89\" (UniqueName: \"kubernetes.io/projected/27b507e8-a4b3-49cb-bef2-85a319a10257-kube-api-access-nvz89\") pod \"mariadb-operator-controller-manager-66997756f6-s8b9p\" (UID: \"27b507e8-a4b3-49cb-bef2-85a319a10257\") " 
pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.446380 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.447433 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bj7gq" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.460004 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.460949 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.462834 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.463627 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.466501 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7nmpc" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.466802 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-9l9lk" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.488014 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.488946 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.491659 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-wjht2" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.512596 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.549420 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552325 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" 
Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552364 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq9mv\" (UniqueName: \"kubernetes.io/projected/d09a2660-c1e2-4305-b601-f9fb39b12ed9-kube-api-access-gq9mv\") pod \"placement-operator-controller-manager-57bd55f9b7-t8b9r\" (UID: \"d09a2660-c1e2-4305-b601-f9fb39b12ed9\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552388 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wnsq\" (UniqueName: \"kubernetes.io/projected/191cd867-8aef-41cd-ae38-18b08d073f5d-kube-api-access-5wnsq\") pod \"ovn-operator-controller-manager-85c99d655-6k58x\" (UID: \"191cd867-8aef-41cd-ae38-18b08d073f5d\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552412 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htbrr\" (UniqueName: \"kubernetes.io/projected/73445d4e-349f-4e37-a75d-44949a14db73-kube-api-access-htbrr\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552443 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x9gd\" (UniqueName: \"kubernetes.io/projected/9340dde2-09ac-43c0-ab0e-b2ce8ed53de0-kube-api-access-4x9gd\") pod \"swift-operator-controller-manager-79558bbfbf-h2gnn\" (UID: \"9340dde2-09ac-43c0-ab0e-b2ce8ed53de0\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552473 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s864\" (UniqueName: \"kubernetes.io/projected/d0fc20a0-4c08-4552-be44-459c503d50c3-kube-api-access-8s864\") pod \"octavia-operator-controller-manager-745bbbd77b-jpncb\" (UID: \"d0fc20a0-4c08-4552-be44-459c503d50c3\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552502 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkvb9\" (UniqueName: \"kubernetes.io/projected/9647c082-6b36-4f38-b1fb-663f095997e9-kube-api-access-lkvb9\") pod \"keystone-operator-controller-manager-6c78d668d5-m86tn\" (UID: \"9647c082-6b36-4f38-b1fb-663f095997e9\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvp48\" (UniqueName: \"kubernetes.io/projected/4eb6df58-4273-41ac-8d6d-34d04a30adef-kube-api-access-qvp48\") pod \"neutron-operator-controller-manager-54967dbbdf-4hrhw\" (UID: \"4eb6df58-4273-41ac-8d6d-34d04a30adef\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552584 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94d6r\" (UniqueName: \"kubernetes.io/projected/4f1057b4-de48-4123-986c-795f9957899a-kube-api-access-94d6r\") pod \"nova-operator-controller-manager-5ddd85db87-wt2rd\" (UID: \"4f1057b4-de48-4123-986c-795f9957899a\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.552612 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvz89\" (UniqueName: 
\"kubernetes.io/projected/27b507e8-a4b3-49cb-bef2-85a319a10257-kube-api-access-nvz89\") pod \"mariadb-operator-controller-manager-66997756f6-s8b9p\" (UID: \"27b507e8-a4b3-49cb-bef2-85a319a10257\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.562133 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.570822 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.572001 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvp48\" (UniqueName: \"kubernetes.io/projected/4eb6df58-4273-41ac-8d6d-34d04a30adef-kube-api-access-qvp48\") pod \"neutron-operator-controller-manager-54967dbbdf-4hrhw\" (UID: \"4eb6df58-4273-41ac-8d6d-34d04a30adef\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.577460 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.578646 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94d6r\" (UniqueName: \"kubernetes.io/projected/4f1057b4-de48-4123-986c-795f9957899a-kube-api-access-94d6r\") pod \"nova-operator-controller-manager-5ddd85db87-wt2rd\" (UID: \"4f1057b4-de48-4123-986c-795f9957899a\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.582247 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.586537 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkvb9\" (UniqueName: \"kubernetes.io/projected/9647c082-6b36-4f38-b1fb-663f095997e9-kube-api-access-lkvb9\") pod \"keystone-operator-controller-manager-6c78d668d5-m86tn\" (UID: \"9647c082-6b36-4f38-b1fb-663f095997e9\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.587772 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvz89\" (UniqueName: \"kubernetes.io/projected/27b507e8-a4b3-49cb-bef2-85a319a10257-kube-api-access-nvz89\") pod \"mariadb-operator-controller-manager-66997756f6-s8b9p\" (UID: \"27b507e8-a4b3-49cb-bef2-85a319a10257\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.589209 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.599707 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.601345 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.603461 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8lm5t" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.610304 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.611228 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.623792 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-449jw" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.632101 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.640778 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.641649 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.646959 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ghj4x" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653761 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653805 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49ftt\" (UniqueName: \"kubernetes.io/projected/6565f17b-d11e-4f28-bc32-f6e43062f81b-kube-api-access-49ftt\") pod \"test-operator-controller-manager-8467ccb4c8-ts8dz\" (UID: \"6565f17b-d11e-4f28-bc32-f6e43062f81b\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653835 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq9mv\" (UniqueName: \"kubernetes.io/projected/d09a2660-c1e2-4305-b601-f9fb39b12ed9-kube-api-access-gq9mv\") pod \"placement-operator-controller-manager-57bd55f9b7-t8b9r\" (UID: \"d09a2660-c1e2-4305-b601-f9fb39b12ed9\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653861 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wnsq\" (UniqueName: \"kubernetes.io/projected/191cd867-8aef-41cd-ae38-18b08d073f5d-kube-api-access-5wnsq\") pod 
\"ovn-operator-controller-manager-85c99d655-6k58x\" (UID: \"191cd867-8aef-41cd-ae38-18b08d073f5d\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653883 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htbrr\" (UniqueName: \"kubernetes.io/projected/73445d4e-349f-4e37-a75d-44949a14db73-kube-api-access-htbrr\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653916 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x9gd\" (UniqueName: \"kubernetes.io/projected/9340dde2-09ac-43c0-ab0e-b2ce8ed53de0-kube-api-access-4x9gd\") pod \"swift-operator-controller-manager-79558bbfbf-h2gnn\" (UID: \"9340dde2-09ac-43c0-ab0e-b2ce8ed53de0\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653936 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s864\" (UniqueName: \"kubernetes.io/projected/d0fc20a0-4c08-4552-be44-459c503d50c3-kube-api-access-8s864\") pod \"octavia-operator-controller-manager-745bbbd77b-jpncb\" (UID: \"d0fc20a0-4c08-4552-be44-459c503d50c3\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.653988 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75kcj\" (UniqueName: \"kubernetes.io/projected/9f1309cd-f84d-48a6-a8bc-fd4f70307c12-kube-api-access-75kcj\") pod \"watcher-operator-controller-manager-7fcbb7ddf5-xlhwm\" (UID: \"9f1309cd-f84d-48a6-a8bc-fd4f70307c12\") " 
pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.654017 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8dcb\" (UniqueName: \"kubernetes.io/projected/52b91f42-32e6-4e15-887f-56098da3900b-kube-api-access-z8dcb\") pod \"telemetry-operator-controller-manager-56dc67d744-rw4dl\" (UID: \"52b91f42-32e6-4e15-887f-56098da3900b\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" Feb 18 19:50:07 crc kubenswrapper[4932]: E0218 19:50:07.654123 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:07 crc kubenswrapper[4932]: E0218 19:50:07.654157 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert podName:73445d4e-349f-4e37-a75d-44949a14db73 nodeName:}" failed. No retries permitted until 2026-02-18 19:50:08.154144811 +0000 UTC m=+971.736099656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" (UID: "73445d4e-349f-4e37-a75d-44949a14db73") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.674410 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.683750 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htbrr\" (UniqueName: \"kubernetes.io/projected/73445d4e-349f-4e37-a75d-44949a14db73-kube-api-access-htbrr\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.687782 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wnsq\" (UniqueName: \"kubernetes.io/projected/191cd867-8aef-41cd-ae38-18b08d073f5d-kube-api-access-5wnsq\") pod \"ovn-operator-controller-manager-85c99d655-6k58x\" (UID: \"191cd867-8aef-41cd-ae38-18b08d073f5d\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.691484 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.698658 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.699219 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq9mv\" (UniqueName: \"kubernetes.io/projected/d09a2660-c1e2-4305-b601-f9fb39b12ed9-kube-api-access-gq9mv\") pod \"placement-operator-controller-manager-57bd55f9b7-t8b9r\" (UID: \"d09a2660-c1e2-4305-b601-f9fb39b12ed9\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.699877 4932 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-4x9gd\" (UniqueName: \"kubernetes.io/projected/9340dde2-09ac-43c0-ab0e-b2ce8ed53de0-kube-api-access-4x9gd\") pod \"swift-operator-controller-manager-79558bbfbf-h2gnn\" (UID: \"9340dde2-09ac-43c0-ab0e-b2ce8ed53de0\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.701649 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s864\" (UniqueName: \"kubernetes.io/projected/d0fc20a0-4c08-4552-be44-459c503d50c3-kube-api-access-8s864\") pod \"octavia-operator-controller-manager-745bbbd77b-jpncb\" (UID: \"d0fc20a0-4c08-4552-be44-459c503d50c3\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.731620 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.732355 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.733060 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.756716 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.758181 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75kcj\" (UniqueName: \"kubernetes.io/projected/9f1309cd-f84d-48a6-a8bc-fd4f70307c12-kube-api-access-75kcj\") pod \"watcher-operator-controller-manager-7fcbb7ddf5-xlhwm\" (UID: \"9f1309cd-f84d-48a6-a8bc-fd4f70307c12\") " pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.758226 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8dcb\" (UniqueName: \"kubernetes.io/projected/52b91f42-32e6-4e15-887f-56098da3900b-kube-api-access-z8dcb\") pod \"telemetry-operator-controller-manager-56dc67d744-rw4dl\" (UID: \"52b91f42-32e6-4e15-887f-56098da3900b\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.758271 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49ftt\" (UniqueName: \"kubernetes.io/projected/6565f17b-d11e-4f28-bc32-f6e43062f81b-kube-api-access-49ftt\") pod \"test-operator-controller-manager-8467ccb4c8-ts8dz\" (UID: \"6565f17b-d11e-4f28-bc32-f6e43062f81b\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" Feb 18 19:50:07 crc kubenswrapper[4932]: E0218 19:50:07.758708 4932 secret.go:188] Couldn't 
get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:07 crc kubenswrapper[4932]: E0218 19:50:07.759501 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert podName:376e77a5-0e6f-4999-a037-96154984442f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:08.759272132 +0000 UTC m=+972.341226987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert") pod "infra-operator-controller-manager-66d6b5f488-dv4j7" (UID: "376e77a5-0e6f-4999-a037-96154984442f") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.781596 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8dcb\" (UniqueName: \"kubernetes.io/projected/52b91f42-32e6-4e15-887f-56098da3900b-kube-api-access-z8dcb\") pod \"telemetry-operator-controller-manager-56dc67d744-rw4dl\" (UID: \"52b91f42-32e6-4e15-887f-56098da3900b\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.782019 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75kcj\" (UniqueName: \"kubernetes.io/projected/9f1309cd-f84d-48a6-a8bc-fd4f70307c12-kube-api-access-75kcj\") pod \"watcher-operator-controller-manager-7fcbb7ddf5-xlhwm\" (UID: \"9f1309cd-f84d-48a6-a8bc-fd4f70307c12\") " pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.788554 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49ftt\" (UniqueName: \"kubernetes.io/projected/6565f17b-d11e-4f28-bc32-f6e43062f81b-kube-api-access-49ftt\") pod 
\"test-operator-controller-manager-8467ccb4c8-ts8dz\" (UID: \"6565f17b-d11e-4f28-bc32-f6e43062f81b\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.799791 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.806838 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.808147 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.810266 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.812187 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.813167 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-kr2tb" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.813342 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.825115 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.836144 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.840871 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.842501 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.855680 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zslpg" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.872096 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.903026 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.917555 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.941252 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.947935 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.956038 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.961838 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.962031 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.962070 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpmpb\" (UniqueName: \"kubernetes.io/projected/7d117b07-cdb8-4d98-bd18-87d6511259af-kube-api-access-zpmpb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-d5brc\" (UID: \"7d117b07-cdb8-4d98-bd18-87d6511259af\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.962121 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvjj4\" (UniqueName: \"kubernetes.io/projected/6545794f-bb0e-4cb6-848b-436201e3af4f-kube-api-access-bvjj4\") pod 
\"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.967432 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.969805 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf"] Feb 18 19:50:07 crc kubenswrapper[4932]: I0218 19:50:07.979824 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.048654 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f286c1e_d207_47a0_86be_6711856071a7.slice/crio-507836a98e58d8390eceb759c7e3f4c0a437dc3549a4a0456b2b501ef58aed22 WatchSource:0}: Error finding container 507836a98e58d8390eceb759c7e3f4c0a437dc3549a4a0456b2b501ef58aed22: Status 404 returned error can't find the container with id 507836a98e58d8390eceb759c7e3f4c0a437dc3549a4a0456b2b501ef58aed22 Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.050476 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33f4dcd6_0eea_40f3_9968_458594d82013.slice/crio-e7ea7aa53bb3d84ac557d012eb1963c6b521ba0fec1e0f6076fd51e578b2e2a6 WatchSource:0}: Error finding container e7ea7aa53bb3d84ac557d012eb1963c6b521ba0fec1e0f6076fd51e578b2e2a6: Status 404 returned error can't find the container with id e7ea7aa53bb3d84ac557d012eb1963c6b521ba0fec1e0f6076fd51e578b2e2a6 Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.063236 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.063328 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.063352 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpmpb\" (UniqueName: \"kubernetes.io/projected/7d117b07-cdb8-4d98-bd18-87d6511259af-kube-api-access-zpmpb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-d5brc\" (UID: \"7d117b07-cdb8-4d98-bd18-87d6511259af\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.063374 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvjj4\" (UniqueName: \"kubernetes.io/projected/6545794f-bb0e-4cb6-848b-436201e3af4f-kube-api-access-bvjj4\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.063592 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.063688 4932 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:08.563664281 +0000 UTC m=+972.145619126 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.063729 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.063771 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:08.563756583 +0000 UTC m=+972.145711428 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "metrics-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.091062 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvjj4\" (UniqueName: \"kubernetes.io/projected/6545794f-bb0e-4cb6-848b-436201e3af4f-kube-api-access-bvjj4\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.092621 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpmpb\" (UniqueName: \"kubernetes.io/projected/7d117b07-cdb8-4d98-bd18-87d6511259af-kube-api-access-zpmpb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-d5brc\" (UID: \"7d117b07-cdb8-4d98-bd18-87d6511259af\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.108824 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" event={"ID":"4f286c1e-d207-47a0-86be-6711856071a7","Type":"ContainerStarted","Data":"507836a98e58d8390eceb759c7e3f4c0a437dc3549a4a0456b2b501ef58aed22"} Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.117542 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" event={"ID":"33f4dcd6-0eea-40f3-9968-458594d82013","Type":"ContainerStarted","Data":"e7ea7aa53bb3d84ac557d012eb1963c6b521ba0fec1e0f6076fd51e578b2e2a6"} Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.134322 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" event={"ID":"d03b5e78-a45c-49aa-8915-336be03c8c94","Type":"ContainerStarted","Data":"468dedad4dff3ccab4a0b7d04e45ed8000b19fc6f8d83ce6f9c4a3d105e4cd26"} Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.139156 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.165100 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.165421 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.165473 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert podName:73445d4e-349f-4e37-a75d-44949a14db73 nodeName:}" failed. No retries permitted until 2026-02-18 19:50:09.165457568 +0000 UTC m=+972.747412413 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" (UID: "73445d4e-349f-4e37-a75d-44949a14db73") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.168795 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.191047 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.193497 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.195135 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0fe77a1_c4a7_422f_b7c2_3062c2af1393.slice/crio-9ddbcd6cc848c003e8a7979ab620537eb2cc973bb24a978cc08edae9dcad0b2c WatchSource:0}: Error finding container 9ddbcd6cc848c003e8a7979ab620537eb2cc973bb24a978cc08edae9dcad0b2c: Status 404 returned error can't find the container with id 9ddbcd6cc848c003e8a7979ab620537eb2cc973bb24a978cc08edae9dcad0b2c Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.385882 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.412765 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.418136 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.465198 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffff0e6b_64e2_499f_8296_f374c5d62450.slice/crio-d0f897ba4525ef6554658953a828194254e35dd379efe8bbd5cbf6b47e4dd555 WatchSource:0}: Error finding container d0f897ba4525ef6554658953a828194254e35dd379efe8bbd5cbf6b47e4dd555: Status 404 returned error can't find the container with id d0f897ba4525ef6554658953a828194254e35dd379efe8bbd5cbf6b47e4dd555 Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.478374 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eb6df58_4273_41ac_8d6d_34d04a30adef.slice/crio-f660d40407f3cc6187fcbcfef12f703bc9f4c6eac4a6b53a951c2ed860f0384f WatchSource:0}: Error finding container f660d40407f3cc6187fcbcfef12f703bc9f4c6eac4a6b53a951c2ed860f0384f: Status 404 returned error can't find the container with id f660d40407f3cc6187fcbcfef12f703bc9f4c6eac4a6b53a951c2ed860f0384f Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.512251 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.524150 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0fc20a0_4c08_4552_be44_459c503d50c3.slice/crio-9d78d41a8c40277056de600c1b54d3a25321de416c4d536bbdcf205264742024 WatchSource:0}: Error finding container 9d78d41a8c40277056de600c1b54d3a25321de416c4d536bbdcf205264742024: Status 404 returned error can't find the container with id 9d78d41a8c40277056de600c1b54d3a25321de416c4d536bbdcf205264742024 Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.575450 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.575576 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.575708 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.575753 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:09.575739695 +0000 UTC m=+973.157694540 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.576107 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.576133 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:09.576126015 +0000 UTC m=+973.158080860 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "metrics-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.626291 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.632115 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.634143 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod191cd867_8aef_41cd_ae38_18b08d073f5d.slice/crio-8762ad162e0cd8666f12a4479ad9fd70ebb09e6167052664e041e0414a993c99 WatchSource:0}: Error finding container 8762ad162e0cd8666f12a4479ad9fd70ebb09e6167052664e041e0414a993c99: Status 404 returned error can't find the 
container with id 8762ad162e0cd8666f12a4479ad9fd70ebb09e6167052664e041e0414a993c99 Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.637287 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.637755 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27b507e8_a4b3_49cb_bef2_85a319a10257.slice/crio-9584f838a6e0fe8662f839c4b306fa2375447aa86a8667d7a9c4b37e9a27dda2 WatchSource:0}: Error finding container 9584f838a6e0fe8662f839c4b306fa2375447aa86a8667d7a9c4b37e9a27dda2: Status 404 returned error can't find the container with id 9584f838a6e0fe8662f839c4b306fa2375447aa86a8667d7a9c4b37e9a27dda2 Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.638594 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f1057b4_de48_4123_986c_795f9957899a.slice/crio-4b157105b0eeb85afad3cd2abf54e85dc2ba3273951e498dab089349eb453ae1 WatchSource:0}: Error finding container 4b157105b0eeb85afad3cd2abf54e85dc2ba3273951e498dab089349eb453ae1: Status 404 returned error can't find the container with id 4b157105b0eeb85afad3cd2abf54e85dc2ba3273951e498dab089349eb453ae1 Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.640550 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-94d6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5ddd85db87-wt2rd_openstack-operators(4f1057b4-de48-4123-986c-795f9957899a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.642437 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" podUID="4f1057b4-de48-4123-986c-795f9957899a" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.757122 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.778354 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.782059 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd09a2660_c1e2_4305_b601_f9fb39b12ed9.slice/crio-7ab443a13b88d5c12a98d95e72ef7a785689a4589c00962c7735e57f05697d32 WatchSource:0}: Error finding container 7ab443a13b88d5c12a98d95e72ef7a785689a4589c00962c7735e57f05697d32: Status 404 returned error can't find the container with id 7ab443a13b88d5c12a98d95e72ef7a785689a4589c00962c7735e57f05697d32 Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.785016 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn"] Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.792031 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.793670 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f1309cd_f84d_48a6_a8bc_fd4f70307c12.slice/crio-1cb0d20898a8806332d87aa680db1617c80ceb32b03316ee072c92a5b5da0504 WatchSource:0}: Error finding container 1cb0d20898a8806332d87aa680db1617c80ceb32b03316ee072c92a5b5da0504: Status 404 returned error can't find the container with id 1cb0d20898a8806332d87aa680db1617c80ceb32b03316ee072c92a5b5da0504 Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.798473 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gq9mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57bd55f9b7-t8b9r_openstack-operators(d09a2660-c1e2-4305-b601-f9fb39b12ed9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.799673 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" podUID="d09a2660-c1e2-4305-b601-f9fb39b12ed9" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.810031 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.810431 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 
19:50:08.810499 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert podName:376e77a5-0e6f-4999-a037-96154984442f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:10.810464618 +0000 UTC m=+974.392419463 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert") pod "infra-operator-controller-manager-66d6b5f488-dv4j7" (UID: "376e77a5-0e6f-4999-a037-96154984442f") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.815438 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.58:5001/openstack-k8s-operators/watcher-operator:bccc5f477aecf1b112841224406211ceeff240ba,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-75kcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7fcbb7ddf5-xlhwm_openstack-operators(9f1309cd-f84d-48a6-a8bc-fd4f70307c12): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.815548 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8dcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-56dc67d744-rw4dl_openstack-operators(52b91f42-32e6-4e15-887f-56098da3900b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.818602 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" podUID="52b91f42-32e6-4e15-887f-56098da3900b" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.818847 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" podUID="9f1309cd-f84d-48a6-a8bc-fd4f70307c12" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.865643 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz"] Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.880381 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-49ftt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8467ccb4c8-ts8dz_openstack-operators(6565f17b-d11e-4f28-bc32-f6e43062f81b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.881976 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" podUID="6565f17b-d11e-4f28-bc32-f6e43062f81b" Feb 18 19:50:08 crc kubenswrapper[4932]: I0218 19:50:08.897806 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc"] Feb 18 19:50:08 crc kubenswrapper[4932]: W0218 19:50:08.908578 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d117b07_cdb8_4d98_bd18_87d6511259af.slice/crio-73fe9d48f61803e493d76bc4de11b80d31b3efbeb15af691a97d62faa17e43c9 WatchSource:0}: Error finding container 73fe9d48f61803e493d76bc4de11b80d31b3efbeb15af691a97d62faa17e43c9: Status 404 returned error can't find the container with id 
73fe9d48f61803e493d76bc4de11b80d31b3efbeb15af691a97d62faa17e43c9 Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.916324 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpmpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-d5brc_openstack-operators(7d117b07-cdb8-4d98-bd18-87d6511259af): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:50:08 crc kubenswrapper[4932]: E0218 19:50:08.917971 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" podUID="7d117b07-cdb8-4d98-bd18-87d6511259af" Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.156058 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" event={"ID":"a0fe77a1-c4a7-422f-b7c2-3062c2af1393","Type":"ContainerStarted","Data":"9ddbcd6cc848c003e8a7979ab620537eb2cc973bb24a978cc08edae9dcad0b2c"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.157652 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" 
event={"ID":"4f1057b4-de48-4123-986c-795f9957899a","Type":"ContainerStarted","Data":"4b157105b0eeb85afad3cd2abf54e85dc2ba3273951e498dab089349eb453ae1"} Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.159575 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" podUID="4f1057b4-de48-4123-986c-795f9957899a" Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.189937 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" event={"ID":"ffff0e6b-64e2-499f-8296-f374c5d62450","Type":"ContainerStarted","Data":"d0f897ba4525ef6554658953a828194254e35dd379efe8bbd5cbf6b47e4dd555"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.189987 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" event={"ID":"191cd867-8aef-41cd-ae38-18b08d073f5d","Type":"ContainerStarted","Data":"8762ad162e0cd8666f12a4479ad9fd70ebb09e6167052664e041e0414a993c99"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.190001 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" event={"ID":"9340dde2-09ac-43c0-ab0e-b2ce8ed53de0","Type":"ContainerStarted","Data":"f6252645b2b7572ffbe84333faa5c64fbaf250f9eb400dbe3894cf40398f3ff1"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.190010 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" 
event={"ID":"9647c082-6b36-4f38-b1fb-663f095997e9","Type":"ContainerStarted","Data":"c54637c0d17b95a210c8a23473047e2eb4d6f68a84916016223ed49d63d7fe85"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.190021 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" event={"ID":"4eb6df58-4273-41ac-8d6d-34d04a30adef","Type":"ContainerStarted","Data":"f660d40407f3cc6187fcbcfef12f703bc9f4c6eac4a6b53a951c2ed860f0384f"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.190456 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" event={"ID":"d0fc20a0-4c08-4552-be44-459c503d50c3","Type":"ContainerStarted","Data":"9d78d41a8c40277056de600c1b54d3a25321de416c4d536bbdcf205264742024"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.191589 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" event={"ID":"3967efad-3234-435e-b755-f684ffd74918","Type":"ContainerStarted","Data":"909acaeac0150e726816e54fdf2e638be37c3f5afd2973732684bd269fafa781"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.192865 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" event={"ID":"7d117b07-cdb8-4d98-bd18-87d6511259af","Type":"ContainerStarted","Data":"73fe9d48f61803e493d76bc4de11b80d31b3efbeb15af691a97d62faa17e43c9"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.194219 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" event={"ID":"6565f17b-d11e-4f28-bc32-f6e43062f81b","Type":"ContainerStarted","Data":"04f7dba8bd493f97fa2230d32e7a1a86b4b3bf952c7f245f066e7b375129f626"} Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.195429 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" podUID="7d117b07-cdb8-4d98-bd18-87d6511259af" Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.196428 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" podUID="6565f17b-d11e-4f28-bc32-f6e43062f81b" Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.205389 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" event={"ID":"59af7bc1-7774-4102-ae6c-2d7f820d3b93","Type":"ContainerStarted","Data":"b4a6cd93bfed5ed21418b5c1830c171b8dbb74c315729388d5073102959eba17"} Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.210962 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.58:5001/openstack-k8s-operators/watcher-operator:bccc5f477aecf1b112841224406211ceeff240ba\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" podUID="9f1309cd-f84d-48a6-a8bc-fd4f70307c12" Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.212775 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" 
pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" podUID="52b91f42-32e6-4e15-887f-56098da3900b" Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.213784 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" podUID="d09a2660-c1e2-4305-b601-f9fb39b12ed9" Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.207894 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" event={"ID":"6b690eeb-2e37-49d8-9f44-9ca086aa2f00","Type":"ContainerStarted","Data":"b642512460248bc2b08067e00e663ae932b9fbf0fbf6c1d3cc0135252757086e"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.214106 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" event={"ID":"9f1309cd-f84d-48a6-a8bc-fd4f70307c12","Type":"ContainerStarted","Data":"1cb0d20898a8806332d87aa680db1617c80ceb32b03316ee072c92a5b5da0504"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.214145 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" event={"ID":"52b91f42-32e6-4e15-887f-56098da3900b","Type":"ContainerStarted","Data":"89b79ca88a564057e8c16954b8b9ef51642964daa2277fe2a3be9f44ec459a37"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.214162 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" 
event={"ID":"d09a2660-c1e2-4305-b601-f9fb39b12ed9","Type":"ContainerStarted","Data":"7ab443a13b88d5c12a98d95e72ef7a785689a4589c00962c7735e57f05697d32"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.214226 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" event={"ID":"27b507e8-a4b3-49cb-bef2-85a319a10257","Type":"ContainerStarted","Data":"9584f838a6e0fe8662f839c4b306fa2375447aa86a8667d7a9c4b37e9a27dda2"} Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.216405 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.218697 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.218744 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert podName:73445d4e-349f-4e37-a75d-44949a14db73 nodeName:}" failed. No retries permitted until 2026-02-18 19:50:11.218730845 +0000 UTC m=+974.800685690 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" (UID: "73445d4e-349f-4e37-a75d-44949a14db73") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.624800 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:09 crc kubenswrapper[4932]: I0218 19:50:09.624935 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.625019 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.625138 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.625155 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:11.625106416 +0000 UTC m=+975.207061331 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "webhook-server-cert" not found Feb 18 19:50:09 crc kubenswrapper[4932]: E0218 19:50:09.625285 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:11.62526742 +0000 UTC m=+975.207222265 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "metrics-server-cert" not found Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.257330 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" podUID="d09a2660-c1e2-4305-b601-f9fb39b12ed9" Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.258898 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" podUID="6565f17b-d11e-4f28-bc32-f6e43062f81b" Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.261209 4932 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" podUID="4f1057b4-de48-4123-986c-795f9957899a" Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.261254 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" podUID="52b91f42-32e6-4e15-887f-56098da3900b" Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.261280 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.58:5001/openstack-k8s-operators/watcher-operator:bccc5f477aecf1b112841224406211ceeff240ba\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" podUID="9f1309cd-f84d-48a6-a8bc-fd4f70307c12" Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.262526 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" podUID="7d117b07-cdb8-4d98-bd18-87d6511259af" Feb 18 19:50:10 crc kubenswrapper[4932]: I0218 19:50:10.847419 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.847591 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:10 crc kubenswrapper[4932]: E0218 19:50:10.847963 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert podName:376e77a5-0e6f-4999-a037-96154984442f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:14.847943639 +0000 UTC m=+978.429898484 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert") pod "infra-operator-controller-manager-66d6b5f488-dv4j7" (UID: "376e77a5-0e6f-4999-a037-96154984442f") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:11 crc kubenswrapper[4932]: I0218 19:50:11.253381 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:11 crc kubenswrapper[4932]: E0218 19:50:11.253513 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:11 crc kubenswrapper[4932]: E0218 19:50:11.253576 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert 
podName:73445d4e-349f-4e37-a75d-44949a14db73 nodeName:}" failed. No retries permitted until 2026-02-18 19:50:15.253557781 +0000 UTC m=+978.835512626 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" (UID: "73445d4e-349f-4e37-a75d-44949a14db73") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:11 crc kubenswrapper[4932]: I0218 19:50:11.658583 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:11 crc kubenswrapper[4932]: I0218 19:50:11.658753 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:11 crc kubenswrapper[4932]: E0218 19:50:11.658801 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:50:11 crc kubenswrapper[4932]: E0218 19:50:11.658892 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:15.658875416 +0000 UTC m=+979.240830261 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "webhook-server-cert" not found Feb 18 19:50:11 crc kubenswrapper[4932]: E0218 19:50:11.658900 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:50:11 crc kubenswrapper[4932]: E0218 19:50:11.658958 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:15.658940728 +0000 UTC m=+979.240895593 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "metrics-server-cert" not found Feb 18 19:50:14 crc kubenswrapper[4932]: I0218 19:50:14.918203 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:14 crc kubenswrapper[4932]: E0218 19:50:14.918455 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:14 crc kubenswrapper[4932]: E0218 19:50:14.918816 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert 
podName:376e77a5-0e6f-4999-a037-96154984442f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:22.918787681 +0000 UTC m=+986.500742556 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert") pod "infra-operator-controller-manager-66d6b5f488-dv4j7" (UID: "376e77a5-0e6f-4999-a037-96154984442f") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:15 crc kubenswrapper[4932]: I0218 19:50:15.331475 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:15 crc kubenswrapper[4932]: E0218 19:50:15.331727 4932 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:15 crc kubenswrapper[4932]: E0218 19:50:15.331825 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert podName:73445d4e-349f-4e37-a75d-44949a14db73 nodeName:}" failed. No retries permitted until 2026-02-18 19:50:23.331798775 +0000 UTC m=+986.913753660 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" (UID: "73445d4e-349f-4e37-a75d-44949a14db73") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:50:15 crc kubenswrapper[4932]: I0218 19:50:15.737840 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:15 crc kubenswrapper[4932]: E0218 19:50:15.738116 4932 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:50:15 crc kubenswrapper[4932]: E0218 19:50:15.738450 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:23.738418392 +0000 UTC m=+987.320373277 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "metrics-server-cert" not found Feb 18 19:50:15 crc kubenswrapper[4932]: I0218 19:50:15.739263 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:15 crc kubenswrapper[4932]: E0218 19:50:15.739482 4932 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:50:15 crc kubenswrapper[4932]: E0218 19:50:15.739585 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs podName:6545794f-bb0e-4cb6-848b-436201e3af4f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:23.73956024 +0000 UTC m=+987.321515135 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs") pod "openstack-operator-controller-manager-5ffbcbf949-cc86z" (UID: "6545794f-bb0e-4cb6-848b-436201e3af4f") : secret "webhook-server-cert" not found Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.232386 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89" Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.232872 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvp48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54967dbbdf-4hrhw_openstack-operators(4eb6df58-4273-41ac-8d6d-34d04a30adef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.234113 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" podUID="4eb6df58-4273-41ac-8d6d-34d04a30adef" Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.342558 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" podUID="4eb6df58-4273-41ac-8d6d-34d04a30adef" Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.781363 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816" Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.781565 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5wnsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-85c99d655-6k58x_openstack-operators(191cd867-8aef-41cd-ae38-18b08d073f5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:50:20 crc kubenswrapper[4932]: E0218 19:50:20.782809 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" podUID="191cd867-8aef-41cd-ae38-18b08d073f5d" Feb 18 19:50:21 crc kubenswrapper[4932]: E0218 19:50:21.334230 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d" Feb 18 19:50:21 crc kubenswrapper[4932]: E0218 19:50:21.334489 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ktdsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-57746b5ff9-56fbf_openstack-operators(59af7bc1-7774-4102-ae6c-2d7f820d3b93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:50:21 crc kubenswrapper[4932]: E0218 19:50:21.335729 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" podUID="59af7bc1-7774-4102-ae6c-2d7f820d3b93" Feb 18 19:50:21 crc kubenswrapper[4932]: E0218 19:50:21.347576 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" podUID="59af7bc1-7774-4102-ae6c-2d7f820d3b93" Feb 18 19:50:21 crc kubenswrapper[4932]: E0218 19:50:21.348134 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" podUID="191cd867-8aef-41cd-ae38-18b08d073f5d" Feb 18 19:50:22 crc kubenswrapper[4932]: E0218 19:50:22.833724 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469" Feb 18 19:50:22 crc kubenswrapper[4932]: E0218 19:50:22.834123 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkvb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-6c78d668d5-m86tn_openstack-operators(9647c082-6b36-4f38-b1fb-663f095997e9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:50:22 crc kubenswrapper[4932]: E0218 19:50:22.835565 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" podUID="9647c082-6b36-4f38-b1fb-663f095997e9" Feb 18 19:50:22 crc kubenswrapper[4932]: I0218 19:50:22.949912 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod 
\"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" Feb 18 19:50:22 crc kubenswrapper[4932]: E0218 19:50:22.950069 4932 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:22 crc kubenswrapper[4932]: E0218 19:50:22.950182 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert podName:376e77a5-0e6f-4999-a037-96154984442f nodeName:}" failed. No retries permitted until 2026-02-18 19:50:38.950145747 +0000 UTC m=+1002.532100602 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert") pod "infra-operator-controller-manager-66d6b5f488-dv4j7" (UID: "376e77a5-0e6f-4999-a037-96154984442f") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:50:23 crc kubenswrapper[4932]: E0218 19:50:23.278551 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:00e0076b910b180d2ee76f7fa74f058fd1e2bee9e313f3a87c5f84bdd2600e2a" Feb 18 19:50:23 crc kubenswrapper[4932]: E0218 19:50:23.278719 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:00e0076b910b180d2ee76f7fa74f058fd1e2bee9e313f3a87c5f84bdd2600e2a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxs72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-54fb488b88-4m7xr_openstack-operators(3967efad-3234-435e-b755-f684ffd74918): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:50:23 crc kubenswrapper[4932]: E0218 19:50:23.279828 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" podUID="3967efad-3234-435e-b755-f684ffd74918" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.355976 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.363856 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/73445d4e-349f-4e37-a75d-44949a14db73-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9\" (UID: \"73445d4e-349f-4e37-a75d-44949a14db73\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:23 crc kubenswrapper[4932]: E0218 19:50:23.384516 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" podUID="9647c082-6b36-4f38-b1fb-663f095997e9" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.388325 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" Feb 18 19:50:23 crc kubenswrapper[4932]: E0218 19:50:23.393694 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:00e0076b910b180d2ee76f7fa74f058fd1e2bee9e313f3a87c5f84bdd2600e2a\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" podUID="3967efad-3234-435e-b755-f684ffd74918" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.766116 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.766531 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.772061 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-metrics-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.775062 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6545794f-bb0e-4cb6-848b-436201e3af4f-webhook-certs\") pod \"openstack-operator-controller-manager-5ffbcbf949-cc86z\" (UID: \"6545794f-bb0e-4cb6-848b-436201e3af4f\") " pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:23 crc kubenswrapper[4932]: I0218 19:50:23.845534 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9"] Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.058091 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.403405 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" event={"ID":"a0fe77a1-c4a7-422f-b7c2-3062c2af1393","Type":"ContainerStarted","Data":"733a9d1a44e9a682b945b22ef5e79205bff20c260b3d0d97498686fbbc646da2"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.403778 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.405664 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" event={"ID":"d0fc20a0-4c08-4552-be44-459c503d50c3","Type":"ContainerStarted","Data":"a545682e8d8422d1a3126d8cf777a1aea7727035549b1d20359506b1ade75484"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.406076 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.407153 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" event={"ID":"9340dde2-09ac-43c0-ab0e-b2ce8ed53de0","Type":"ContainerStarted","Data":"681d9468c6e7659152849f2c97a567529e41c4387edcd23775f7036224273257"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.407516 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.413366 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" 
event={"ID":"33f4dcd6-0eea-40f3-9968-458594d82013","Type":"ContainerStarted","Data":"7fb1a3a85285deb683da88ce039aaabecb5959455c349532bbfbfc93a9df50cc"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.413460 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.417642 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" event={"ID":"27b507e8-a4b3-49cb-bef2-85a319a10257","Type":"ContainerStarted","Data":"af559c036304a1c6bcea9be8b47f693d13662c1daa5fa882c9026ad81a8f8abd"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.418240 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.424308 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" event={"ID":"d03b5e78-a45c-49aa-8915-336be03c8c94","Type":"ContainerStarted","Data":"7d2d42a4c0efebac458437ba4d27f1dd825f3649fbb53e22edd5be8a2590eb4c"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.425106 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.426736 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs" podStartSLOduration=3.316008031 podStartE2EDuration="18.426716261s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.210828366 +0000 UTC m=+971.792783211" lastFinishedPulling="2026-02-18 19:50:23.321536596 +0000 UTC m=+986.903491441" 
observedRunningTime="2026-02-18 19:50:24.423634385 +0000 UTC m=+988.005589230" watchObservedRunningTime="2026-02-18 19:50:24.426716261 +0000 UTC m=+988.008671106" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.430888 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" event={"ID":"73445d4e-349f-4e37-a75d-44949a14db73","Type":"ContainerStarted","Data":"789734b0e05fca0142cb2c956b46e3d6f5ba46c8eff8bc6c672afee877d82ed5"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.432848 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" event={"ID":"ffff0e6b-64e2-499f-8296-f374c5d62450","Type":"ContainerStarted","Data":"92e84309122038c81174fd17412a316dc294db8340cdc3dd56ab0af0b29a8ad1"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.433522 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.437403 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" event={"ID":"6b690eeb-2e37-49d8-9f44-9ca086aa2f00","Type":"ContainerStarted","Data":"7079b445c0febb98258065446880f5675a69d5feb7711237795273fbe1fd642d"} Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.437569 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.443945 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" event={"ID":"4f286c1e-d207-47a0-86be-6711856071a7","Type":"ContainerStarted","Data":"c4709d551b5fdefc852a7e39f4523756300d7ccd6a833f54b8cfd4efc562db03"} Feb 18 
19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.445536 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh"
Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.447978 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb" podStartSLOduration=2.658469886 podStartE2EDuration="17.447957904s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.52641599 +0000 UTC m=+972.108370835" lastFinishedPulling="2026-02-18 19:50:23.315904018 +0000 UTC m=+986.897858853" observedRunningTime="2026-02-18 19:50:24.442953731 +0000 UTC m=+988.024908576" watchObservedRunningTime="2026-02-18 19:50:24.447957904 +0000 UTC m=+988.029912749"
Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.467744 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p" podStartSLOduration=2.75196737 podStartE2EDuration="17.467727191s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.640154982 +0000 UTC m=+972.222109827" lastFinishedPulling="2026-02-18 19:50:23.355914813 +0000 UTC m=+986.937869648" observedRunningTime="2026-02-18 19:50:24.466540712 +0000 UTC m=+988.048495577" watchObservedRunningTime="2026-02-18 19:50:24.467727191 +0000 UTC m=+988.049682036"
Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.491290 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb" podStartSLOduration=3.2271993820000002 podStartE2EDuration="18.491272831s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.05228067 +0000 UTC m=+971.634235515" lastFinishedPulling="2026-02-18 19:50:23.316354099 +0000 UTC m=+986.898308964" observedRunningTime="2026-02-18 19:50:24.487480298 +0000 UTC m=+988.069435143" watchObservedRunningTime="2026-02-18 19:50:24.491272831 +0000 UTC m=+988.073227676"
Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.510129 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn" podStartSLOduration=2.977754532 podStartE2EDuration="17.510111845s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.784257222 +0000 UTC m=+972.366212067" lastFinishedPulling="2026-02-18 19:50:23.316614535 +0000 UTC m=+986.898569380" observedRunningTime="2026-02-18 19:50:24.509433889 +0000 UTC m=+988.091388744" watchObservedRunningTime="2026-02-18 19:50:24.510111845 +0000 UTC m=+988.092066690"
Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.523520 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts" podStartSLOduration=3.166958339 podStartE2EDuration="18.523498785s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:07.955812214 +0000 UTC m=+971.537767059" lastFinishedPulling="2026-02-18 19:50:23.31235265 +0000 UTC m=+986.894307505" observedRunningTime="2026-02-18 19:50:24.523393163 +0000 UTC m=+988.105348018" watchObservedRunningTime="2026-02-18 19:50:24.523498785 +0000 UTC m=+988.105453630"
Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.578385 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn" podStartSLOduration=3.496869636 podStartE2EDuration="18.578368247s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.234446928 +0000 UTC m=+971.816401773" lastFinishedPulling="2026-02-18 19:50:23.315945539 +0000 UTC m=+986.897900384" observedRunningTime="2026-02-18 19:50:24.57485308 +0000 UTC m=+988.156807925" watchObservedRunningTime="2026-02-18 19:50:24.578368247 +0000 UTC m=+988.160323092"
Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.597157 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7" podStartSLOduration=2.75116653 podStartE2EDuration="17.597132569s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.477013193 +0000 UTC m=+972.058968038" lastFinishedPulling="2026-02-18 19:50:23.322979232 +0000 UTC m=+986.904934077" observedRunningTime="2026-02-18 19:50:24.592868954 +0000 UTC m=+988.174823799" watchObservedRunningTime="2026-02-18 19:50:24.597132569 +0000 UTC m=+988.179087414"
Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.613930 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh" podStartSLOduration=3.348424709 podStartE2EDuration="18.613911222s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.050406494 +0000 UTC m=+971.632361339" lastFinishedPulling="2026-02-18 19:50:23.315892997 +0000 UTC m=+986.897847852" observedRunningTime="2026-02-18 19:50:24.613051821 +0000 UTC m=+988.195006666" watchObservedRunningTime="2026-02-18 19:50:24.613911222 +0000 UTC m=+988.195866077"
Feb 18 19:50:24 crc kubenswrapper[4932]: I0218 19:50:24.633327 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z"]
Feb 18 19:50:24 crc kubenswrapper[4932]: W0218 19:50:24.669073 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6545794f_bb0e_4cb6_848b_436201e3af4f.slice/crio-3b7e14e6736528fbf97324bcd2ee44eccaca9d1d0317b0426cca5f6afe9da258 WatchSource:0}: Error finding container 3b7e14e6736528fbf97324bcd2ee44eccaca9d1d0317b0426cca5f6afe9da258: Status 404 returned error can't find the container with id 3b7e14e6736528fbf97324bcd2ee44eccaca9d1d0317b0426cca5f6afe9da258
Feb 18 19:50:25 crc kubenswrapper[4932]: I0218 19:50:25.452633 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" event={"ID":"6545794f-bb0e-4cb6-848b-436201e3af4f","Type":"ContainerStarted","Data":"65cd089d368629d13519e4dc731dd3400379eb2e810a849e7e631a519fdba06b"}
Feb 18 19:50:25 crc kubenswrapper[4932]: I0218 19:50:25.452880 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" event={"ID":"6545794f-bb0e-4cb6-848b-436201e3af4f","Type":"ContainerStarted","Data":"3b7e14e6736528fbf97324bcd2ee44eccaca9d1d0317b0426cca5f6afe9da258"}
Feb 18 19:50:25 crc kubenswrapper[4932]: I0218 19:50:25.454089 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z"
Feb 18 19:50:25 crc kubenswrapper[4932]: I0218 19:50:25.488593 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z" podStartSLOduration=18.488573229 podStartE2EDuration="18.488573229s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:50:25.477208849 +0000 UTC m=+989.059163714" watchObservedRunningTime="2026-02-18 19:50:25.488573229 +0000 UTC m=+989.070528074"
Feb 18 19:50:34 crc kubenswrapper[4932]: I0218 19:50:34.074586 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5ffbcbf949-cc86z"
Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.111141 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-clwts"
Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.145764 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-mp2bb"
Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.199523 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-b46xh"
Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.239715 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-9595d6797-7ssxs"
Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.423483 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-qqxpn"
Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.590073 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-brmw7"
Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.738121 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-jpncb"
Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.738293 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-s8b9p"
Feb 18 19:50:37 crc kubenswrapper[4932]: I0218 19:50:37.840428 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-h2gnn"
Feb 18 19:50:39 crc kubenswrapper[4932]: I0218 19:50:39.000236 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7"
Feb 18 19:50:39 crc kubenswrapper[4932]: I0218 19:50:39.005898 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/376e77a5-0e6f-4999-a037-96154984442f-cert\") pod \"infra-operator-controller-manager-66d6b5f488-dv4j7\" (UID: \"376e77a5-0e6f-4999-a037-96154984442f\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7"
Feb 18 19:50:39 crc kubenswrapper[4932]: I0218 19:50:39.118841 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7"
Feb 18 19:50:42 crc kubenswrapper[4932]: E0218 19:50:42.845546 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89"
Feb 18 19:50:42 crc kubenswrapper[4932]: E0218 19:50:42.846067 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gq9mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57bd55f9b7-t8b9r_openstack-operators(d09a2660-c1e2-4305-b601-f9fb39b12ed9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 18 19:50:42 crc kubenswrapper[4932]: E0218 19:50:42.847245 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" podUID="d09a2660-c1e2-4305-b601-f9fb39b12ed9"
Feb 18 19:50:44 crc kubenswrapper[4932]: E0218 19:50:44.720503 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d"
Feb 18 19:50:44 crc kubenswrapper[4932]: E0218 19:50:44.720991 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ktdsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-57746b5ff9-56fbf_openstack-operators(59af7bc1-7774-4102-ae6c-2d7f820d3b93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 18 19:50:44 crc kubenswrapper[4932]: E0218 19:50:44.722295 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" podUID="59af7bc1-7774-4102-ae6c-2d7f820d3b93"
Feb 18 19:50:45 crc kubenswrapper[4932]: I0218 19:50:45.776278 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7"]
Feb 18 19:50:45 crc kubenswrapper[4932]: W0218 19:50:45.810660 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod376e77a5_0e6f_4999_a037_96154984442f.slice/crio-b1c697ac065f55c0a882531ee8f3b109cc02940db50cd305170f993a7a4f767a WatchSource:0}: Error finding container b1c697ac065f55c0a882531ee8f3b109cc02940db50cd305170f993a7a4f767a: Status 404 returned error can't find the container with id b1c697ac065f55c0a882531ee8f3b109cc02940db50cd305170f993a7a4f767a
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.611249 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" event={"ID":"7d117b07-cdb8-4d98-bd18-87d6511259af","Type":"ContainerStarted","Data":"de167f08b4b55de274dc6521f592f3dee703e3c67b177dc47133e5bf08bd181e"}
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.614143 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" event={"ID":"73445d4e-349f-4e37-a75d-44949a14db73","Type":"ContainerStarted","Data":"4408289fc9349fb32cacbd1a2cafce9967888a2b8961294417d56dbafadba8b6"}
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.614229 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.615784 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" event={"ID":"4f1057b4-de48-4123-986c-795f9957899a","Type":"ContainerStarted","Data":"81107fe48eaa2a2b6ce420585d8ab6381c6a6a8bd5f76fa2f82145d17ba0a2a0"}
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.615963 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.617080 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" event={"ID":"376e77a5-0e6f-4999-a037-96154984442f","Type":"ContainerStarted","Data":"b1c697ac065f55c0a882531ee8f3b109cc02940db50cd305170f993a7a4f767a"}
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.618329 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" event={"ID":"6565f17b-d11e-4f28-bc32-f6e43062f81b","Type":"ContainerStarted","Data":"5ee2c195f8d7176fe0436f657cd8c7df8fd514e879d3c3b20dd00948fb14e37f"}
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.618497 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.619303 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" event={"ID":"191cd867-8aef-41cd-ae38-18b08d073f5d","Type":"ContainerStarted","Data":"2a1414219dedda1cca9406a393bf90f333fd52ca579e9ff594aeedf7547c234d"}
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.619476 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.620433 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" event={"ID":"4eb6df58-4273-41ac-8d6d-34d04a30adef","Type":"ContainerStarted","Data":"47b401148e026022a236089acf4941c77a21fd3f60222aed4b784b68f38d3642"}
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.620647 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.622287 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" event={"ID":"52b91f42-32e6-4e15-887f-56098da3900b","Type":"ContainerStarted","Data":"f2402a314bbb251296fb89c887ae805e1cdaa206060223e8555ac386b731d163"}
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.622416 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.624610 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-d5brc" podStartSLOduration=3.183036819 podStartE2EDuration="39.624600548s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.916143931 +0000 UTC m=+972.498098776" lastFinishedPulling="2026-02-18 19:50:45.35770762 +0000 UTC m=+1008.939662505" observedRunningTime="2026-02-18 19:50:46.623734997 +0000 UTC m=+1010.205689842" watchObservedRunningTime="2026-02-18 19:50:46.624600548 +0000 UTC m=+1010.206555393"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.630507 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" event={"ID":"3967efad-3234-435e-b755-f684ffd74918","Type":"ContainerStarted","Data":"73a61ca2035c140cb774984c28e792f84f9f9126a13c137b4f50b161e00b88da"}
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.630739 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.632514 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" event={"ID":"9f1309cd-f84d-48a6-a8bc-fd4f70307c12","Type":"ContainerStarted","Data":"c30ca0181129682e02046bbab9351f2a35066687e2829ea3bcb6252fe033efd9"}
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.632802 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.634186 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" event={"ID":"9647c082-6b36-4f38-b1fb-663f095997e9","Type":"ContainerStarted","Data":"18158043147a1c6fc3f911389f310dbde99162b08c2c7593ece0c2f94d0c4a9a"}
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.634383 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.660838 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd" podStartSLOduration=5.72221832 podStartE2EDuration="39.66081715s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.640430979 +0000 UTC m=+972.222385824" lastFinishedPulling="2026-02-18 19:50:42.579029809 +0000 UTC m=+1006.160984654" observedRunningTime="2026-02-18 19:50:46.638987913 +0000 UTC m=+1010.220942758" watchObservedRunningTime="2026-02-18 19:50:46.66081715 +0000 UTC m=+1010.242771995"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.676510 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9" podStartSLOduration=18.19104588 podStartE2EDuration="39.676486976s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:23.860101824 +0000 UTC m=+987.442056669" lastFinishedPulling="2026-02-18 19:50:45.34554288 +0000 UTC m=+1008.927497765" observedRunningTime="2026-02-18 19:50:46.675092822 +0000 UTC m=+1010.257047677" watchObservedRunningTime="2026-02-18 19:50:46.676486976 +0000 UTC m=+1010.258441831"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.699285 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw" podStartSLOduration=2.822817026 podStartE2EDuration="39.699250757s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.481216647 +0000 UTC m=+972.063171492" lastFinishedPulling="2026-02-18 19:50:45.357650368 +0000 UTC m=+1008.939605223" observedRunningTime="2026-02-18 19:50:46.69816101 +0000 UTC m=+1010.280115855" watchObservedRunningTime="2026-02-18 19:50:46.699250757 +0000 UTC m=+1010.281205602"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.716245 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl" podStartSLOduration=14.759504586 podStartE2EDuration="39.716228915s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.815376569 +0000 UTC m=+972.397331414" lastFinishedPulling="2026-02-18 19:50:33.772100898 +0000 UTC m=+997.354055743" observedRunningTime="2026-02-18 19:50:46.71152528 +0000 UTC m=+1010.293480125" watchObservedRunningTime="2026-02-18 19:50:46.716228915 +0000 UTC m=+1010.298183760"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.731624 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x" podStartSLOduration=2.76489005 podStartE2EDuration="39.731607354s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.637049236 +0000 UTC m=+972.219004081" lastFinishedPulling="2026-02-18 19:50:45.60376655 +0000 UTC m=+1009.185721385" observedRunningTime="2026-02-18 19:50:46.730448476 +0000 UTC m=+1010.312403321" watchObservedRunningTime="2026-02-18 19:50:46.731607354 +0000 UTC m=+1010.313562199"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.746531 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz" podStartSLOduration=3.275944509 podStartE2EDuration="39.746517372s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.880210826 +0000 UTC m=+972.462165671" lastFinishedPulling="2026-02-18 19:50:45.350783679 +0000 UTC m=+1008.932738534" observedRunningTime="2026-02-18 19:50:46.743478227 +0000 UTC m=+1010.325433062" watchObservedRunningTime="2026-02-18 19:50:46.746517372 +0000 UTC m=+1010.328472217"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.766663 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr" podStartSLOduration=3.39224931 podStartE2EDuration="40.766647808s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.246984376 +0000 UTC m=+971.828939221" lastFinishedPulling="2026-02-18 19:50:45.621382854 +0000 UTC m=+1009.203337719" observedRunningTime="2026-02-18 19:50:46.763479839 +0000 UTC m=+1010.345434684" watchObservedRunningTime="2026-02-18 19:50:46.766647808 +0000 UTC m=+1010.348602653"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.809266 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm" podStartSLOduration=3.276176263 podStartE2EDuration="39.809251937s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.815286736 +0000 UTC m=+972.397241571" lastFinishedPulling="2026-02-18 19:50:45.34836239 +0000 UTC m=+1008.930317245" observedRunningTime="2026-02-18 19:50:46.802983653 +0000 UTC m=+1010.384938498" watchObservedRunningTime="2026-02-18 19:50:46.809251937 +0000 UTC m=+1010.391206782"
Feb 18 19:50:46 crc kubenswrapper[4932]: I0218 19:50:46.818980 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn" podStartSLOduration=2.762814567 podStartE2EDuration="39.818964926s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.405744938 +0000 UTC m=+971.987699783" lastFinishedPulling="2026-02-18 19:50:45.461895297 +0000 UTC m=+1009.043850142" observedRunningTime="2026-02-18 19:50:46.816394323 +0000 UTC m=+1010.398349178" watchObservedRunningTime="2026-02-18 19:50:46.818964926 +0000 UTC m=+1010.400919771"
Feb 18 19:50:48 crc kubenswrapper[4932]: I0218 19:50:48.653117 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" event={"ID":"376e77a5-0e6f-4999-a037-96154984442f","Type":"ContainerStarted","Data":"ea0442f6ccccd1eb24ac1bc1ae00a7a29b4eb24d2c1d407438f6f49e47cdaeb0"}
Feb 18 19:50:48 crc kubenswrapper[4932]: I0218 19:50:48.653665 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7"
Feb 18 19:50:48 crc kubenswrapper[4932]: I0218 19:50:48.671886 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7" podStartSLOduration=40.193013356 podStartE2EDuration="42.671868312s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:45.827472651 +0000 UTC m=+1009.409427496" lastFinishedPulling="2026-02-18 19:50:48.306327567 +0000 UTC m=+1011.888282452" observedRunningTime="2026-02-18 19:50:48.669058613 +0000 UTC m=+1012.251013468" watchObservedRunningTime="2026-02-18 19:50:48.671868312 +0000 UTC m=+1012.253823177"
Feb 18 19:50:53 crc kubenswrapper[4932]: I0218 19:50:53.397705 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-k67j9"
Feb 18 19:50:54 crc kubenswrapper[4932]: E0218 19:50:54.182041 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" podUID="d09a2660-c1e2-4305-b601-f9fb39b12ed9"
Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.581478 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-4m7xr"
Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.639749 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-m86tn"
Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.678694 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-4hrhw"
Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.737133 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wt2rd"
Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.827783 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6k58x"
Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.943690 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-rw4dl"
Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.950437 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-ts8dz"
Feb 18 19:50:57 crc kubenswrapper[4932]: I0218 19:50:57.970543 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7fcbb7ddf5-xlhwm"
Feb 18 19:50:59 crc kubenswrapper[4932]: I0218 19:50:59.126343 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-dv4j7"
Feb 18 19:51:00 crc kubenswrapper[4932]: E0218 19:51:00.181990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:a5f362e48eb379fd891a28080673947763f8103f443f08a01d13cd09a3123e4d\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" podUID="59af7bc1-7774-4102-ae6c-2d7f820d3b93"
Feb 18 19:51:08 crc kubenswrapper[4932]: I0218 19:51:08.817379 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" event={"ID":"d09a2660-c1e2-4305-b601-f9fb39b12ed9","Type":"ContainerStarted","Data":"dcb57fcd46fef995fbf922fe40b2c085bb9b037a023c91fd3d7ffc176245ba93"}
Feb 18 19:51:08 crc kubenswrapper[4932]: I0218 19:51:08.818142 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r"
Feb 18 19:51:08 crc kubenswrapper[4932]: I0218 19:51:08.838601 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r" podStartSLOduration=2.761789623 podStartE2EDuration="1m1.838583113s" podCreationTimestamp="2026-02-18 19:50:07 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.798303128 +0000 UTC m=+972.380257973" lastFinishedPulling="2026-02-18 19:51:07.875096578 +0000 UTC m=+1031.457051463" observedRunningTime="2026-02-18 19:51:08.836916232 +0000 UTC m=+1032.418871087" watchObservedRunningTime="2026-02-18 19:51:08.838583113 +0000 UTC m=+1032.420537958"
Feb 18 19:51:11 crc kubenswrapper[4932]: I0218 19:51:11.853437 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" event={"ID":"59af7bc1-7774-4102-ae6c-2d7f820d3b93","Type":"ContainerStarted","Data":"fb40dce15dd57eb5455d8ea0d14849574a9a7c8e57b6995459fa9f81a80fc3a2"}
Feb 18 19:51:11 crc kubenswrapper[4932]: I0218 19:51:11.853962 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf"
Feb 18 19:51:11 crc kubenswrapper[4932]: I0218 19:51:11.878341 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf" podStartSLOduration=2.341153556 podStartE2EDuration="1m5.878319595s" podCreationTimestamp="2026-02-18 19:50:06 +0000 UTC" firstStartedPulling="2026-02-18 19:50:08.116957883 +0000 UTC m=+971.698912728" lastFinishedPulling="2026-02-18 19:51:11.654123922 +0000 UTC m=+1035.236078767" observedRunningTime="2026-02-18 19:51:11.872572483 +0000 UTC m=+1035.454527338" watchObservedRunningTime="2026-02-18 19:51:11.878319595 +0000 UTC m=+1035.460274450"
Feb 18 19:51:17 crc kubenswrapper[4932]: I0218 19:51:17.125942 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56fbf"
Feb 18 19:51:17 crc kubenswrapper[4932]: I0218 19:51:17.843049 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-t8b9r"
Feb 18 19:51:35 crc kubenswrapper[4932]: I0218 19:51:35.987076 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d46db5bb7-js9zs"]
Feb 18 19:51:35 crc kubenswrapper[4932]: I0218 19:51:35.990519 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs"
Feb 18 19:51:35 crc kubenswrapper[4932]: I0218 19:51:35.994012 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 18 19:51:35 crc kubenswrapper[4932]: I0218 19:51:35.994073 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 18 19:51:35 crc kubenswrapper[4932]: I0218 19:51:35.994305 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 18 19:51:35 crc kubenswrapper[4932]: I0218 19:51:35.994741 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7qdlp"
Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.002103 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d46db5bb7-js9zs"]
Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.021639 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59c78cff8f-mnmbx"]
Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.022975 4932 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.025857 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.036143 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c78cff8f-mnmbx"] Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.052801 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fccb0fa8-b88d-469c-b88e-838aa9f5d481-config\") pod \"dnsmasq-dns-5d46db5bb7-js9zs\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.052883 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4k8g\" (UniqueName: \"kubernetes.io/projected/fccb0fa8-b88d-469c-b88e-838aa9f5d481-kube-api-access-b4k8g\") pod \"dnsmasq-dns-5d46db5bb7-js9zs\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.153855 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-dns-svc\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.154002 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fccb0fa8-b88d-469c-b88e-838aa9f5d481-config\") pod \"dnsmasq-dns-5d46db5bb7-js9zs\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc 
kubenswrapper[4932]: I0218 19:51:36.154141 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxps7\" (UniqueName: \"kubernetes.io/projected/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-kube-api-access-jxps7\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.154256 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4k8g\" (UniqueName: \"kubernetes.io/projected/fccb0fa8-b88d-469c-b88e-838aa9f5d481-kube-api-access-b4k8g\") pod \"dnsmasq-dns-5d46db5bb7-js9zs\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.154325 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-config\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.155226 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fccb0fa8-b88d-469c-b88e-838aa9f5d481-config\") pod \"dnsmasq-dns-5d46db5bb7-js9zs\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.173009 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4k8g\" (UniqueName: \"kubernetes.io/projected/fccb0fa8-b88d-469c-b88e-838aa9f5d481-kube-api-access-b4k8g\") pod \"dnsmasq-dns-5d46db5bb7-js9zs\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc kubenswrapper[4932]: 
I0218 19:51:36.255132 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxps7\" (UniqueName: \"kubernetes.io/projected/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-kube-api-access-jxps7\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.255228 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-config\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.255251 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-dns-svc\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.256047 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-config\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.256090 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-dns-svc\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.272440 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jxps7\" (UniqueName: \"kubernetes.io/projected/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-kube-api-access-jxps7\") pod \"dnsmasq-dns-59c78cff8f-mnmbx\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.357465 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.365941 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.732223 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d46db5bb7-js9zs"] Feb 18 19:51:36 crc kubenswrapper[4932]: I0218 19:51:36.767047 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c78cff8f-mnmbx"] Feb 18 19:51:36 crc kubenswrapper[4932]: W0218 19:51:36.772336 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bac8c90_8ad0_4e01_8434_92f4bc659e1d.slice/crio-2d0b3b915f083e7565ab24eb11a22c59b965e4e0d82849dbc4f9e1e4e3b64b15 WatchSource:0}: Error finding container 2d0b3b915f083e7565ab24eb11a22c59b965e4e0d82849dbc4f9e1e4e3b64b15: Status 404 returned error can't find the container with id 2d0b3b915f083e7565ab24eb11a22c59b965e4e0d82849dbc4f9e1e4e3b64b15 Feb 18 19:51:37 crc kubenswrapper[4932]: I0218 19:51:37.055240 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" event={"ID":"9bac8c90-8ad0-4e01-8434-92f4bc659e1d","Type":"ContainerStarted","Data":"2d0b3b915f083e7565ab24eb11a22c59b965e4e0d82849dbc4f9e1e4e3b64b15"} Feb 18 19:51:37 crc kubenswrapper[4932]: I0218 19:51:37.056145 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" 
event={"ID":"fccb0fa8-b88d-469c-b88e-838aa9f5d481","Type":"ContainerStarted","Data":"46532bab9d5422ae97530391ba7e12cbc323bc5e7eec881c2be6645f3ff80478"} Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.543342 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c78cff8f-mnmbx"] Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.569752 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57dc99974f-qvkx9"] Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.571220 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.580623 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57dc99974f-qvkx9"] Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.750310 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-dns-svc\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.750412 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j8m6\" (UniqueName: \"kubernetes.io/projected/ab397921-9519-48e8-a5c0-5c388d54b6cd-kube-api-access-5j8m6\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.750578 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-config\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " 
pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.852957 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-dns-svc\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.853007 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j8m6\" (UniqueName: \"kubernetes.io/projected/ab397921-9519-48e8-a5c0-5c388d54b6cd-kube-api-access-5j8m6\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.853045 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-config\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.854072 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-dns-svc\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.854513 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-config\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.865541 4932 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d46db5bb7-js9zs"] Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.883806 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j8m6\" (UniqueName: \"kubernetes.io/projected/ab397921-9519-48e8-a5c0-5c388d54b6cd-kube-api-access-5j8m6\") pod \"dnsmasq-dns-57dc99974f-qvkx9\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.919840 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b9746b6c-vpbf8"] Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.923552 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.942780 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:39 crc kubenswrapper[4932]: I0218 19:51:39.947053 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9746b6c-vpbf8"] Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.057153 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-config\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.057226 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgghm\" (UniqueName: \"kubernetes.io/projected/ca226f67-28b6-4585-a6ed-7d4394cc2a15-kube-api-access-jgghm\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 
19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.057257 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-dns-svc\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.158399 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-dns-svc\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.158491 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-config\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.158533 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgghm\" (UniqueName: \"kubernetes.io/projected/ca226f67-28b6-4585-a6ed-7d4394cc2a15-kube-api-access-jgghm\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.159531 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-dns-svc\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.160033 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-config\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.183168 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgghm\" (UniqueName: \"kubernetes.io/projected/ca226f67-28b6-4585-a6ed-7d4394cc2a15-kube-api-access-jgghm\") pod \"dnsmasq-dns-7b9746b6c-vpbf8\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.245830 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.256118 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57dc99974f-qvkx9"] Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.280891 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-668d7c8657-fkpfr"] Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.285548 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.317538 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668d7c8657-fkpfr"] Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.464289 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-dns-svc\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.464616 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-config\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.464660 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t97ph\" (UniqueName: \"kubernetes.io/projected/7182e8ba-c70f-44ce-b628-21107829cb83-kube-api-access-t97ph\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.565704 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-dns-svc\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.565780 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-config\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.565829 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t97ph\" (UniqueName: \"kubernetes.io/projected/7182e8ba-c70f-44ce-b628-21107829cb83-kube-api-access-t97ph\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.566543 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-dns-svc\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.566549 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-config\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.598041 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t97ph\" (UniqueName: \"kubernetes.io/projected/7182e8ba-c70f-44ce-b628-21107829cb83-kube-api-access-t97ph\") pod \"dnsmasq-dns-668d7c8657-fkpfr\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.619215 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.719820 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.721286 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.728857 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.729152 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.729447 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.729537 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ptcgt" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.729856 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.729989 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.730724 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.738482 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869711 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869759 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869780 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869805 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869821 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869857 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " 
pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869876 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869908 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869926 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869960 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.869975 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dtlp\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-kube-api-access-2dtlp\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc 
kubenswrapper[4932]: I0218 19:51:40.971109 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971186 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971236 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971262 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971313 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971335 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dtlp\" 
(UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-kube-api-access-2dtlp\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971362 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971400 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971422 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971456 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971478 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " 
pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971661 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.971741 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.972008 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.972287 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.972817 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.974060 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.974639 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.975159 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.975636 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.979374 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.990519 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 
19:51:40 crc kubenswrapper[4932]: I0218 19:51:40.993037 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dtlp\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-kube-api-access-2dtlp\") pod \"rabbitmq-server-0\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " pod="openstack/rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.094629 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.095966 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.097595 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.097759 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.098137 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.098838 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.098945 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.099065 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.099290 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-l229h" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.110121 4932 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.112951 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.173772 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.173840 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.173897 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd547864-4d03-45ae-8bb1-10a360d36599-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.173981 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174016 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd547864-4d03-45ae-8bb1-10a360d36599-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174046 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174081 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174116 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174236 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174285 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.174355 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwqrq\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-kube-api-access-fwqrq\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.276072 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.276516 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.276666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwqrq\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-kube-api-access-fwqrq\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.276784 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.276881 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.276986 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd547864-4d03-45ae-8bb1-10a360d36599-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.277133 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.277298 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.277307 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd547864-4d03-45ae-8bb1-10a360d36599-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.277428 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.277480 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.277509 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.278026 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.279702 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc 
kubenswrapper[4932]: I0218 19:51:41.281272 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.282886 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.283808 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.284917 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd547864-4d03-45ae-8bb1-10a360d36599-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.291018 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd547864-4d03-45ae-8bb1-10a360d36599-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.291658 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.294685 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.295872 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.303487 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwqrq\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-kube-api-access-fwqrq\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.422866 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.423006 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.423995 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.427914 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.428126 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-jc7nx" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.428283 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.428508 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.428526 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.428626 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.428787 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.442440 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.480533 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.480651 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.480825 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.480874 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.480912 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgn8m\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-kube-api-access-pgn8m\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.480989 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " 
pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.481020 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.481049 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.481078 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.481118 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4a133994-7b33-4db4-a923-5b90d51e47b9-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.481311 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4a133994-7b33-4db4-a923-5b90d51e47b9-pod-info\") pod 
\"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582303 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582379 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582404 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582450 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4a133994-7b33-4db4-a923-5b90d51e47b9-erlang-cookie-secret\") pod 
\"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582479 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4a133994-7b33-4db4-a923-5b90d51e47b9-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582521 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582536 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582569 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582587 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-plugins-conf\") pod \"notifications-rabbitmq-server-0\" 
(UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582576 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.582605 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgn8m\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-kube-api-access-pgn8m\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.583614 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.583928 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.584713 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: 
\"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.584994 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4a133994-7b33-4db4-a923-5b90d51e47b9-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.585467 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.587612 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4a133994-7b33-4db4-a923-5b90d51e47b9-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.587740 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.587903 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " 
pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.590626 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4a133994-7b33-4db4-a923-5b90d51e47b9-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.602106 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgn8m\" (UniqueName: \"kubernetes.io/projected/4a133994-7b33-4db4-a923-5b90d51e47b9-kube-api-access-pgn8m\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.607814 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"4a133994-7b33-4db4-a923-5b90d51e47b9\") " pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:41 crc kubenswrapper[4932]: I0218 19:51:41.757519 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.790760 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.793774 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.796511 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.796594 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-wgrhb" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.796879 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.810926 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.811665 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.811850 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908314 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908383 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/915c727d-cb48-4649-bd71-30a5edf798d5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908406 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/915c727d-cb48-4649-bd71-30a5edf798d5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908564 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-config-data-default\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908721 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-kolla-config\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908764 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/915c727d-cb48-4649-bd71-30a5edf798d5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908817 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:42 crc kubenswrapper[4932]: I0218 19:51:42.908923 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsxhw\" (UniqueName: 
\"kubernetes.io/projected/915c727d-cb48-4649-bd71-30a5edf798d5-kube-api-access-qsxhw\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.009828 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-kolla-config\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.009873 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/915c727d-cb48-4649-bd71-30a5edf798d5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.009899 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.009951 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsxhw\" (UniqueName: \"kubernetes.io/projected/915c727d-cb48-4649-bd71-30a5edf798d5-kube-api-access-qsxhw\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.009989 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: 
\"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.010005 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/915c727d-cb48-4649-bd71-30a5edf798d5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.010023 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c727d-cb48-4649-bd71-30a5edf798d5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.010040 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-config-data-default\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.010798 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.011232 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-kolla-config\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 
19:51:43.011494 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-config-data-default\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.011640 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/915c727d-cb48-4649-bd71-30a5edf798d5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.012319 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/915c727d-cb48-4649-bd71-30a5edf798d5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.018212 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c727d-cb48-4649-bd71-30a5edf798d5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.023744 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/915c727d-cb48-4649-bd71-30a5edf798d5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.030621 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsxhw\" (UniqueName: 
\"kubernetes.io/projected/915c727d-cb48-4649-bd71-30a5edf798d5-kube-api-access-qsxhw\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.041369 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"915c727d-cb48-4649-bd71-30a5edf798d5\") " pod="openstack/openstack-galera-0" Feb 18 19:51:43 crc kubenswrapper[4932]: I0218 19:51:43.133033 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.176461 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.177663 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.183238 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6r68t" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.185230 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.185413 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.187991 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.210594 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.227923 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9dd7155-a814-4ae0-92b9-6e71461473d5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.227985 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.228012 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9dd7155-a814-4ae0-92b9-6e71461473d5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.228057 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.228122 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 
19:51:44.228151 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpqpw\" (UniqueName: \"kubernetes.io/projected/d9dd7155-a814-4ae0-92b9-6e71461473d5-kube-api-access-dpqpw\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.228204 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9dd7155-a814-4ae0-92b9-6e71461473d5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.228271 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329657 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9dd7155-a814-4ae0-92b9-6e71461473d5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329723 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9dd7155-a814-4ae0-92b9-6e71461473d5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329750 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329782 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329796 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpqpw\" (UniqueName: \"kubernetes.io/projected/d9dd7155-a814-4ae0-92b9-6e71461473d5-kube-api-access-dpqpw\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329823 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9dd7155-a814-4ae0-92b9-6e71461473d5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.329868 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.330140 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.333032 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.333960 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.335380 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d9dd7155-a814-4ae0-92b9-6e71461473d5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.335398 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d9dd7155-a814-4ae0-92b9-6e71461473d5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.335760 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9dd7155-a814-4ae0-92b9-6e71461473d5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.341692 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9dd7155-a814-4ae0-92b9-6e71461473d5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.351033 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.354612 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpqpw\" (UniqueName: \"kubernetes.io/projected/d9dd7155-a814-4ae0-92b9-6e71461473d5-kube-api-access-dpqpw\") pod \"openstack-cell1-galera-0\" (UID: \"d9dd7155-a814-4ae0-92b9-6e71461473d5\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.357144 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.358319 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.360413 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.360738 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.360930 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-9hv7p" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.381078 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.431004 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhjxd\" (UniqueName: \"kubernetes.io/projected/fd0a010e-64af-4552-8098-747bf5644c3c-kube-api-access-mhjxd\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.431080 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd0a010e-64af-4552-8098-747bf5644c3c-kolla-config\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.431142 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fd0a010e-64af-4552-8098-747bf5644c3c-config-data\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.431223 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0a010e-64af-4552-8098-747bf5644c3c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.431249 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0a010e-64af-4552-8098-747bf5644c3c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.502658 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.532745 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0a010e-64af-4552-8098-747bf5644c3c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.532797 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0a010e-64af-4552-8098-747bf5644c3c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.532879 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhjxd\" (UniqueName: \"kubernetes.io/projected/fd0a010e-64af-4552-8098-747bf5644c3c-kube-api-access-mhjxd\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.532911 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd0a010e-64af-4552-8098-747bf5644c3c-kolla-config\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.532955 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fd0a010e-64af-4552-8098-747bf5644c3c-config-data\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.533664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fd0a010e-64af-4552-8098-747bf5644c3c-config-data\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.533801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd0a010e-64af-4552-8098-747bf5644c3c-kolla-config\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.536296 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd0a010e-64af-4552-8098-747bf5644c3c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.538042 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0a010e-64af-4552-8098-747bf5644c3c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc 
kubenswrapper[4932]: I0218 19:51:44.552106 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhjxd\" (UniqueName: \"kubernetes.io/projected/fd0a010e-64af-4552-8098-747bf5644c3c-kube-api-access-mhjxd\") pod \"memcached-0\" (UID: \"fd0a010e-64af-4552-8098-747bf5644c3c\") " pod="openstack/memcached-0" Feb 18 19:51:44 crc kubenswrapper[4932]: I0218 19:51:44.722512 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 19:51:46 crc kubenswrapper[4932]: I0218 19:51:46.866890 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:51:46 crc kubenswrapper[4932]: I0218 19:51:46.867806 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:51:46 crc kubenswrapper[4932]: I0218 19:51:46.869383 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-dtbf6" Feb 18 19:51:46 crc kubenswrapper[4932]: I0218 19:51:46.880127 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:51:46 crc kubenswrapper[4932]: I0218 19:51:46.970381 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlzzn\" (UniqueName: \"kubernetes.io/projected/bf2c7a4b-b600-48af-8081-cbb3c729223f-kube-api-access-hlzzn\") pod \"kube-state-metrics-0\" (UID: \"bf2c7a4b-b600-48af-8081-cbb3c729223f\") " pod="openstack/kube-state-metrics-0" Feb 18 19:51:47 crc kubenswrapper[4932]: I0218 19:51:47.073125 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlzzn\" (UniqueName: \"kubernetes.io/projected/bf2c7a4b-b600-48af-8081-cbb3c729223f-kube-api-access-hlzzn\") pod \"kube-state-metrics-0\" (UID: \"bf2c7a4b-b600-48af-8081-cbb3c729223f\") " pod="openstack/kube-state-metrics-0" Feb 18 19:51:47 crc 
kubenswrapper[4932]: I0218 19:51:47.091206 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlzzn\" (UniqueName: \"kubernetes.io/projected/bf2c7a4b-b600-48af-8081-cbb3c729223f-kube-api-access-hlzzn\") pod \"kube-state-metrics-0\" (UID: \"bf2c7a4b-b600-48af-8081-cbb3c729223f\") " pod="openstack/kube-state-metrics-0" Feb 18 19:51:47 crc kubenswrapper[4932]: I0218 19:51:47.186108 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.172897 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.175124 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.179524 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.179543 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.181772 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.182312 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.182445 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-5jcnf" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.183579 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 
19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.188260 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.190669 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.190720 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.190746 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.190770 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.190961 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.191001 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.191035 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnvgq\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-kube-api-access-cnvgq\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.191069 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.191086 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 
18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.191483 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.197942 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.207258 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294302 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294379 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294868 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294900 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294924 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294954 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.294981 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnvgq\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-kube-api-access-cnvgq\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.295013 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.295030 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.295076 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.296474 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.296572 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.296682 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.297799 4932 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.297842 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e039419306e79ade7652e80c67474011a5658585fd3b39d0b236ffa94ab5d0db/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.299563 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.299581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.299724 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-web-config\") 
pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.300690 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.307420 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.318694 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnvgq\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-kube-api-access-cnvgq\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.338869 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:48 crc kubenswrapper[4932]: I0218 19:51:48.495161 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.714666 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-99qbh"] Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.716464 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.721353 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4q9hb" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.721942 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.726064 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.738225 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-lvg9q"] Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.740709 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.755278 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-99qbh"] Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.787672 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lvg9q"] Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.825125 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/039d44bb-1ad0-4916-8ef2-3cece4829506-ovn-controller-tls-certs\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.825226 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr894\" (UniqueName: \"kubernetes.io/projected/039d44bb-1ad0-4916-8ef2-3cece4829506-kube-api-access-vr894\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.825360 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/039d44bb-1ad0-4916-8ef2-3cece4829506-combined-ca-bundle\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.825411 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-run-ovn\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc 
kubenswrapper[4932]: I0218 19:51:49.825547 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-run\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.825722 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-log-ovn\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.825780 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/039d44bb-1ad0-4916-8ef2-3cece4829506-scripts\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.927967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-run\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928055 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-lib\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928229 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-etc-ovs\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928404 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca19a8de-2aaf-459e-bfcd-d73a819558b0-scripts\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928468 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/039d44bb-1ad0-4916-8ef2-3cece4829506-ovn-controller-tls-certs\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928572 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6nt\" (UniqueName: \"kubernetes.io/projected/ca19a8de-2aaf-459e-bfcd-d73a819558b0-kube-api-access-9r6nt\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928626 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr894\" (UniqueName: \"kubernetes.io/projected/039d44bb-1ad0-4916-8ef2-3cece4829506-kube-api-access-vr894\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928676 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/039d44bb-1ad0-4916-8ef2-3cece4829506-combined-ca-bundle\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928715 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-run-ovn\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928796 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-run\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928876 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-log-ovn\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928917 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-log\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.928974 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/039d44bb-1ad0-4916-8ef2-3cece4829506-scripts\") pod 
\"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.930482 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-run\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.930548 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-run-ovn\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.930780 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/039d44bb-1ad0-4916-8ef2-3cece4829506-var-log-ovn\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.934470 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/039d44bb-1ad0-4916-8ef2-3cece4829506-scripts\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.936338 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/039d44bb-1ad0-4916-8ef2-3cece4829506-ovn-controller-tls-certs\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.936930 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/039d44bb-1ad0-4916-8ef2-3cece4829506-combined-ca-bundle\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:49 crc kubenswrapper[4932]: I0218 19:51:49.949151 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr894\" (UniqueName: \"kubernetes.io/projected/039d44bb-1ad0-4916-8ef2-3cece4829506-kube-api-access-vr894\") pod \"ovn-controller-99qbh\" (UID: \"039d44bb-1ad0-4916-8ef2-3cece4829506\") " pod="openstack/ovn-controller-99qbh" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.029848 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r6nt\" (UniqueName: \"kubernetes.io/projected/ca19a8de-2aaf-459e-bfcd-d73a819558b0-kube-api-access-9r6nt\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.030131 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-log\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.030162 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-run\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.030223 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-lib\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.030244 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-etc-ovs\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.030258 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca19a8de-2aaf-459e-bfcd-d73a819558b0-scripts\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.030532 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-run\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.031008 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-log\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.031362 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-var-lib\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " 
pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.031670 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ca19a8de-2aaf-459e-bfcd-d73a819558b0-etc-ovs\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.032446 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ca19a8de-2aaf-459e-bfcd-d73a819558b0-scripts\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.034785 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-99qbh" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.046733 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.049197 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r6nt\" (UniqueName: \"kubernetes.io/projected/ca19a8de-2aaf-459e-bfcd-d73a819558b0-kube-api-access-9r6nt\") pod \"ovn-controller-ovs-lvg9q\" (UID: \"ca19a8de-2aaf-459e-bfcd-d73a819558b0\") " pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.063462 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.070246 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.070248 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.070424 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.070620 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.070717 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wqcp9" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.070736 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.091614 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.233734 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.233792 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09fcfda8-434e-4759-81cc-47304cbbe9d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.233833 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z558f\" (UniqueName: \"kubernetes.io/projected/09fcfda8-434e-4759-81cc-47304cbbe9d3-kube-api-access-z558f\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.233857 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.234387 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 
crc kubenswrapper[4932]: I0218 19:51:50.234553 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09fcfda8-434e-4759-81cc-47304cbbe9d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.234618 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.234770 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09fcfda8-434e-4759-81cc-47304cbbe9d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.335933 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09fcfda8-434e-4759-81cc-47304cbbe9d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336006 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336025 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/09fcfda8-434e-4759-81cc-47304cbbe9d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336091 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z558f\" (UniqueName: \"kubernetes.io/projected/09fcfda8-434e-4759-81cc-47304cbbe9d3-kube-api-access-z558f\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336482 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09fcfda8-434e-4759-81cc-47304cbbe9d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336887 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336943 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336967 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09fcfda8-434e-4759-81cc-47304cbbe9d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " 
pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.336986 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.337294 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09fcfda8-434e-4759-81cc-47304cbbe9d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.337917 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.338142 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09fcfda8-434e-4759-81cc-47304cbbe9d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.347608 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.358320 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.358876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09fcfda8-434e-4759-81cc-47304cbbe9d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.378243 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z558f\" (UniqueName: \"kubernetes.io/projected/09fcfda8-434e-4759-81cc-47304cbbe9d3-kube-api-access-z558f\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.383854 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"09fcfda8-434e-4759-81cc-47304cbbe9d3\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:50 crc kubenswrapper[4932]: I0218 19:51:50.393847 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.396105 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.397862 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.402796 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.403030 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.403613 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.403695 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mtm2x" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.410282 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.504806 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.504886 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9432af7-4713-4805-b822-efcb8b1fb21d-config\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.504923 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9432af7-4713-4805-b822-efcb8b1fb21d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " 
pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.504954 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phcvk\" (UniqueName: \"kubernetes.io/projected/f9432af7-4713-4805-b822-efcb8b1fb21d-kube-api-access-phcvk\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.504978 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.504999 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9432af7-4713-4805-b822-efcb8b1fb21d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.505018 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.505037 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 
19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606370 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9432af7-4713-4805-b822-efcb8b1fb21d-config\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606430 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9432af7-4713-4805-b822-efcb8b1fb21d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606492 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phcvk\" (UniqueName: \"kubernetes.io/projected/f9432af7-4713-4805-b822-efcb8b1fb21d-kube-api-access-phcvk\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606516 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606537 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9432af7-4713-4805-b822-efcb8b1fb21d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606559 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606579 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606621 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.606968 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.607960 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9432af7-4713-4805-b822-efcb8b1fb21d-config\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.608386 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9432af7-4713-4805-b822-efcb8b1fb21d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 
19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.609306 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9432af7-4713-4805-b822-efcb8b1fb21d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.614233 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.615725 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.616693 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9432af7-4713-4805-b822-efcb8b1fb21d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.630655 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phcvk\" (UniqueName: \"kubernetes.io/projected/f9432af7-4713-4805-b822-efcb8b1fb21d-kube-api-access-phcvk\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.638217 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f9432af7-4713-4805-b822-efcb8b1fb21d\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:54 crc kubenswrapper[4932]: I0218 19:51:54.729788 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.754068 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.754463 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.754767 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4k8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d46db5bb7-js9zs_openstack(fccb0fa8-b88d-469c-b88e-838aa9f5d481): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.756061 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" podUID="fccb0fa8-b88d-469c-b88e-838aa9f5d481" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.785655 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.785716 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.785821 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jxps7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-59c78cff8f-mnmbx_openstack(9bac8c90-8ad0-4e01-8434-92f4bc659e1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:51:55 crc kubenswrapper[4932]: E0218 19:51:55.787009 4932 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" podUID="9bac8c90-8ad0-4e01-8434-92f4bc659e1d" Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.836320 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.843593 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.852579 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57dc99974f-qvkx9"] Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.879194 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:51:56 crc kubenswrapper[4932]: W0218 19:51:56.886127 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod915c727d_cb48_4649_bd71_30a5edf798d5.slice/crio-f6b8ae4f866fc5f739dc311d0f879722297bf27b82ef79f6e5b40fcd9f3981b9 WatchSource:0}: Error finding container f6b8ae4f866fc5f739dc311d0f879722297bf27b82ef79f6e5b40fcd9f3981b9: Status 404 returned error can't find the container with id f6b8ae4f866fc5f739dc311d0f879722297bf27b82ef79f6e5b40fcd9f3981b9 Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.887165 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.928355 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:56 crc kubenswrapper[4932]: I0218 19:51:56.934485 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.052758 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4k8g\" (UniqueName: \"kubernetes.io/projected/fccb0fa8-b88d-469c-b88e-838aa9f5d481-kube-api-access-b4k8g\") pod \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.052855 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxps7\" (UniqueName: \"kubernetes.io/projected/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-kube-api-access-jxps7\") pod \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.052916 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fccb0fa8-b88d-469c-b88e-838aa9f5d481-config\") pod \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\" (UID: \"fccb0fa8-b88d-469c-b88e-838aa9f5d481\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.053623 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fccb0fa8-b88d-469c-b88e-838aa9f5d481-config" (OuterVolumeSpecName: "config") pod "fccb0fa8-b88d-469c-b88e-838aa9f5d481" (UID: "fccb0fa8-b88d-469c-b88e-838aa9f5d481"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.053689 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-dns-svc\") pod \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.053719 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9bac8c90-8ad0-4e01-8434-92f4bc659e1d" (UID: "9bac8c90-8ad0-4e01-8434-92f4bc659e1d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.053766 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-config\") pod \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\" (UID: \"9bac8c90-8ad0-4e01-8434-92f4bc659e1d\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.054410 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-config" (OuterVolumeSpecName: "config") pod "9bac8c90-8ad0-4e01-8434-92f4bc659e1d" (UID: "9bac8c90-8ad0-4e01-8434-92f4bc659e1d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.054687 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.054705 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fccb0fa8-b88d-469c-b88e-838aa9f5d481-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.054716 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.088028 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-kube-api-access-jxps7" (OuterVolumeSpecName: "kube-api-access-jxps7") pod "9bac8c90-8ad0-4e01-8434-92f4bc659e1d" (UID: "9bac8c90-8ad0-4e01-8434-92f4bc659e1d"). InnerVolumeSpecName "kube-api-access-jxps7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.088998 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fccb0fa8-b88d-469c-b88e-838aa9f5d481-kube-api-access-b4k8g" (OuterVolumeSpecName: "kube-api-access-b4k8g") pod "fccb0fa8-b88d-469c-b88e-838aa9f5d481" (UID: "fccb0fa8-b88d-469c-b88e-838aa9f5d481"). InnerVolumeSpecName "kube-api-access-b4k8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.159229 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxps7\" (UniqueName: \"kubernetes.io/projected/9bac8c90-8ad0-4e01-8434-92f4bc659e1d-kube-api-access-jxps7\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.159538 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4k8g\" (UniqueName: \"kubernetes.io/projected/fccb0fa8-b88d-469c-b88e-838aa9f5d481-kube-api-access-b4k8g\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.294021 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d9dd7155-a814-4ae0-92b9-6e71461473d5","Type":"ContainerStarted","Data":"f8af9221181a397252b3223e984d42961b745e96f126e899f25a0d278531e844"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.295865 4932 generic.go:334] "Generic (PLEG): container finished" podID="ab397921-9519-48e8-a5c0-5c388d54b6cd" containerID="7519d416745f73fbe57ab60a5e4c59b970e555510b0fcbd5d3f5ac822320b937" exitCode=0 Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.295965 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" event={"ID":"ab397921-9519-48e8-a5c0-5c388d54b6cd","Type":"ContainerDied","Data":"7519d416745f73fbe57ab60a5e4c59b970e555510b0fcbd5d3f5ac822320b937"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.296022 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" event={"ID":"ab397921-9519-48e8-a5c0-5c388d54b6cd","Type":"ContainerStarted","Data":"fb40657aa3dcb246fdc0a993fa98fd739b898555df945a722c59ed513a24340b"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.299062 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"915c727d-cb48-4649-bd71-30a5edf798d5","Type":"ContainerStarted","Data":"f6b8ae4f866fc5f739dc311d0f879722297bf27b82ef79f6e5b40fcd9f3981b9"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.300294 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"4a133994-7b33-4db4-a923-5b90d51e47b9","Type":"ContainerStarted","Data":"8d6ecdb333ce753d501f234162033571ca4cf78773d9a86903cada2e21a8d576"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.301712 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" event={"ID":"fccb0fa8-b88d-469c-b88e-838aa9f5d481","Type":"ContainerDied","Data":"46532bab9d5422ae97530391ba7e12cbc323bc5e7eec881c2be6645f3ff80478"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.301779 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d46db5bb7-js9zs" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.304339 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd547864-4d03-45ae-8bb1-10a360d36599","Type":"ContainerStarted","Data":"df1d9be37e083e5a4584427f91148d70b49af32f754e3fd54a2d761cb7b0f9e2"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.306969 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" event={"ID":"9bac8c90-8ad0-4e01-8434-92f4bc659e1d","Type":"ContainerDied","Data":"2d0b3b915f083e7565ab24eb11a22c59b965e4e0d82849dbc4f9e1e4e3b64b15"} Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.307057 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59c78cff8f-mnmbx" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.340702 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9746b6c-vpbf8"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.348452 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-668d7c8657-fkpfr"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.356809 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.394909 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d46db5bb7-js9zs"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.424228 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d46db5bb7-js9zs"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.561584 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.566865 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 19:51:57 crc kubenswrapper[4932]: W0218 19:51:57.578008 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9432af7_4713_4805_b822_efcb8b1fb21d.slice/crio-30082970363e00b57a3f0b32e4858a7ca799127c28559e709c0b302d7d54967e WatchSource:0}: Error finding container 30082970363e00b57a3f0b32e4858a7ca799127c28559e709c0b302d7d54967e: Status 404 returned error can't find the container with id 30082970363e00b57a3f0b32e4858a7ca799127c28559e709c0b302d7d54967e Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.606893 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.606949 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.607627 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c78cff8f-mnmbx"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.644687 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.670106 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-99qbh"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.691077 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59c78cff8f-mnmbx"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.696542 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.701231 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.744093 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.896431 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-config\") pod \"ab397921-9519-48e8-a5c0-5c388d54b6cd\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.896611 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j8m6\" (UniqueName: \"kubernetes.io/projected/ab397921-9519-48e8-a5c0-5c388d54b6cd-kube-api-access-5j8m6\") pod \"ab397921-9519-48e8-a5c0-5c388d54b6cd\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.896701 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-dns-svc\") pod \"ab397921-9519-48e8-a5c0-5c388d54b6cd\" (UID: \"ab397921-9519-48e8-a5c0-5c388d54b6cd\") " Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.901238 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab397921-9519-48e8-a5c0-5c388d54b6cd-kube-api-access-5j8m6" (OuterVolumeSpecName: "kube-api-access-5j8m6") pod "ab397921-9519-48e8-a5c0-5c388d54b6cd" (UID: "ab397921-9519-48e8-a5c0-5c388d54b6cd"). InnerVolumeSpecName "kube-api-access-5j8m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.913950 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-config" (OuterVolumeSpecName: "config") pod "ab397921-9519-48e8-a5c0-5c388d54b6cd" (UID: "ab397921-9519-48e8-a5c0-5c388d54b6cd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.930624 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab397921-9519-48e8-a5c0-5c388d54b6cd" (UID: "ab397921-9519-48e8-a5c0-5c388d54b6cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.998676 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j8m6\" (UniqueName: \"kubernetes.io/projected/ab397921-9519-48e8-a5c0-5c388d54b6cd-kube-api-access-5j8m6\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.998743 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:57 crc kubenswrapper[4932]: I0218 19:51:57.998757 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab397921-9519-48e8-a5c0-5c388d54b6cd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.320493 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lvg9q"] Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.321969 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-99qbh" event={"ID":"039d44bb-1ad0-4916-8ef2-3cece4829506","Type":"ContainerStarted","Data":"bd406f9c4241cef51213ac7d66da73fb59f360c39b3da8dfddb80bee7a503913"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.323629 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"09fcfda8-434e-4759-81cc-47304cbbe9d3","Type":"ContainerStarted","Data":"15d43872fee12464bfed6c60d36e086eb12f618ed87aa39d9d79903e12aed140"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.325666 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bf2c7a4b-b600-48af-8081-cbb3c729223f","Type":"ContainerStarted","Data":"e47d3e77ce83e6731fdca0338e3764007d631b786a20a291b2d3ac30da1a2204"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.327773 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" event={"ID":"ab397921-9519-48e8-a5c0-5c388d54b6cd","Type":"ContainerDied","Data":"fb40657aa3dcb246fdc0a993fa98fd739b898555df945a722c59ed513a24340b"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.327803 4932 scope.go:117] "RemoveContainer" containerID="7519d416745f73fbe57ab60a5e4c59b970e555510b0fcbd5d3f5ac822320b937" Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.327810 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57dc99974f-qvkx9" Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.329474 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c","Type":"ContainerStarted","Data":"080ccaf3edee131274523286f1e1cdf3b8aebb0e277f6e516ffc7e73a0cc72c7"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.331526 4932 generic.go:334] "Generic (PLEG): container finished" podID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerID="fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68" exitCode=0 Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.331569 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" event={"ID":"ca226f67-28b6-4585-a6ed-7d4394cc2a15","Type":"ContainerDied","Data":"fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.331584 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" event={"ID":"ca226f67-28b6-4585-a6ed-7d4394cc2a15","Type":"ContainerStarted","Data":"43daca4777cee280f31b3e73b817f441991f4957de8e06f5e125fa3c6e27e74a"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.332696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerStarted","Data":"c079ef0a75a184583fc3bcc63484ddbcd7e9466dbb03675318140b785c3f7c07"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.333947 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f9432af7-4713-4805-b822-efcb8b1fb21d","Type":"ContainerStarted","Data":"30082970363e00b57a3f0b32e4858a7ca799127c28559e709c0b302d7d54967e"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.336487 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="7182e8ba-c70f-44ce-b628-21107829cb83" containerID="21d90ef666981de2a2798c5a9811496799691c81e9c63553c393f18b1c049e7d" exitCode=0 Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.336602 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" event={"ID":"7182e8ba-c70f-44ce-b628-21107829cb83","Type":"ContainerDied","Data":"21d90ef666981de2a2798c5a9811496799691c81e9c63553c393f18b1c049e7d"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.336624 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" event={"ID":"7182e8ba-c70f-44ce-b628-21107829cb83","Type":"ContainerStarted","Data":"4397e01bf815b57b3796c200c92c0f185a71fad8907ccac4ea649586543a8255"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.339137 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fd0a010e-64af-4552-8098-747bf5644c3c","Type":"ContainerStarted","Data":"ccbbd8ff9c2d845a9b9d448c89fd5bd234f92bdd9b5820d334f83c200c320aeb"} Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.431369 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57dc99974f-qvkx9"] Feb 18 19:51:58 crc kubenswrapper[4932]: I0218 19:51:58.437344 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57dc99974f-qvkx9"] Feb 18 19:51:59 crc kubenswrapper[4932]: I0218 19:51:59.192487 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bac8c90-8ad0-4e01-8434-92f4bc659e1d" path="/var/lib/kubelet/pods/9bac8c90-8ad0-4e01-8434-92f4bc659e1d/volumes" Feb 18 19:51:59 crc kubenswrapper[4932]: I0218 19:51:59.193119 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab397921-9519-48e8-a5c0-5c388d54b6cd" path="/var/lib/kubelet/pods/ab397921-9519-48e8-a5c0-5c388d54b6cd/volumes" Feb 18 19:51:59 crc kubenswrapper[4932]: I0218 19:51:59.193649 4932 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="fccb0fa8-b88d-469c-b88e-838aa9f5d481" path="/var/lib/kubelet/pods/fccb0fa8-b88d-469c-b88e-838aa9f5d481/volumes" Feb 18 19:51:59 crc kubenswrapper[4932]: I0218 19:51:59.351879 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lvg9q" event={"ID":"ca19a8de-2aaf-459e-bfcd-d73a819558b0","Type":"ContainerStarted","Data":"1ec0c920417917a1897259352c1ea8c97c2c31eb493540b5918d1e88980afcef"} Feb 18 19:52:05 crc kubenswrapper[4932]: I0218 19:52:05.430148 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" event={"ID":"ca226f67-28b6-4585-a6ed-7d4394cc2a15","Type":"ContainerStarted","Data":"f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b"} Feb 18 19:52:05 crc kubenswrapper[4932]: I0218 19:52:05.430709 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:52:05 crc kubenswrapper[4932]: I0218 19:52:05.455439 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" podStartSLOduration=26.455415482 podStartE2EDuration="26.455415482s" podCreationTimestamp="2026-02-18 19:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:05.45047181 +0000 UTC m=+1089.032426655" watchObservedRunningTime="2026-02-18 19:52:05.455415482 +0000 UTC m=+1089.037370327" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.439746 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bf2c7a4b-b600-48af-8081-cbb3c729223f","Type":"ContainerStarted","Data":"705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.440259 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 
19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.442918 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" event={"ID":"7182e8ba-c70f-44ce-b628-21107829cb83","Type":"ContainerStarted","Data":"0f7421be32fb94af35c77032e6ff053b63a9d0e9743ff1d49b4fdfb654ec47b1"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.443323 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.445325 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fd0a010e-64af-4552-8098-747bf5644c3c","Type":"ContainerStarted","Data":"e06fac0e9c835022a5975f254d96d752a99a4fc07ffeaa18abdcddf486beeac5"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.445476 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.447444 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-99qbh" event={"ID":"039d44bb-1ad0-4916-8ef2-3cece4829506","Type":"ContainerStarted","Data":"9ae0c69c90fd66c6cded06757075ca9d6936468e1ea8ee5d08da3677fd8f054b"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.447563 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-99qbh" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.451977 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"09fcfda8-434e-4759-81cc-47304cbbe9d3","Type":"ContainerStarted","Data":"2db0d0904c63721aa21a520d6b3ee4ba67d3afb109bbe34b6940fff392794d1b"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.455493 4932 generic.go:334] "Generic (PLEG): container finished" podID="ca19a8de-2aaf-459e-bfcd-d73a819558b0" containerID="d2fb04b0f491285b17c4c9db5b62180a469fec03213c24b5e7175dfbc5dc620e" exitCode=0 Feb 18 19:52:06 crc 
kubenswrapper[4932]: I0218 19:52:06.455568 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lvg9q" event={"ID":"ca19a8de-2aaf-459e-bfcd-d73a819558b0","Type":"ContainerDied","Data":"d2fb04b0f491285b17c4c9db5b62180a469fec03213c24b5e7175dfbc5dc620e"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.459493 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"915c727d-cb48-4649-bd71-30a5edf798d5","Type":"ContainerStarted","Data":"ab4a1ef53fc46a73ae038e80a0b9f945d129feb61eaa7b9bb7cc45f2dc7ef05f"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.464257 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.60062522 podStartE2EDuration="20.464231243s" podCreationTimestamp="2026-02-18 19:51:46 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.490416571 +0000 UTC m=+1081.072371416" lastFinishedPulling="2026-02-18 19:52:05.354022594 +0000 UTC m=+1088.935977439" observedRunningTime="2026-02-18 19:52:06.452993066 +0000 UTC m=+1090.034947911" watchObservedRunningTime="2026-02-18 19:52:06.464231243 +0000 UTC m=+1090.046186118" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.465680 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d9dd7155-a814-4ae0-92b9-6e71461473d5","Type":"ContainerStarted","Data":"6b00a32228a57fd318a20db1c61c54b5b976211c74483d59bcbd1b6e0cb1c8ff"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.469846 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f9432af7-4713-4805-b822-efcb8b1fb21d","Type":"ContainerStarted","Data":"983306ad45c7bd8aa27aa3c06c2ab4016ec1762a2f201370eef6e39173d64ffe"} Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.493691 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-99qbh" 
podStartSLOduration=10.59558404 podStartE2EDuration="17.493663528s" podCreationTimestamp="2026-02-18 19:51:49 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.486798692 +0000 UTC m=+1081.068753537" lastFinishedPulling="2026-02-18 19:52:04.38487818 +0000 UTC m=+1087.966833025" observedRunningTime="2026-02-18 19:52:06.483144019 +0000 UTC m=+1090.065098874" watchObservedRunningTime="2026-02-18 19:52:06.493663528 +0000 UTC m=+1090.075618383" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.506442 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" podStartSLOduration=26.506423503 podStartE2EDuration="26.506423503s" podCreationTimestamp="2026-02-18 19:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:06.497540954 +0000 UTC m=+1090.079495829" watchObservedRunningTime="2026-02-18 19:52:06.506423503 +0000 UTC m=+1090.088378348" Feb 18 19:52:06 crc kubenswrapper[4932]: I0218 19:52:06.533873 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.812928744 podStartE2EDuration="22.533851878s" podCreationTimestamp="2026-02-18 19:51:44 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.488375521 +0000 UTC m=+1081.070330366" lastFinishedPulling="2026-02-18 19:52:04.209298645 +0000 UTC m=+1087.791253500" observedRunningTime="2026-02-18 19:52:06.518385077 +0000 UTC m=+1090.100339922" watchObservedRunningTime="2026-02-18 19:52:06.533851878 +0000 UTC m=+1090.115806733" Feb 18 19:52:07 crc kubenswrapper[4932]: I0218 19:52:07.480079 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"4a133994-7b33-4db4-a923-5b90d51e47b9","Type":"ContainerStarted","Data":"f215ae9fbece324e9bd723a56e0e71d31c81d0090f9fea3975b162ab4d64e974"} Feb 18 19:52:07 crc kubenswrapper[4932]: I0218 
19:52:07.483770 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lvg9q" event={"ID":"ca19a8de-2aaf-459e-bfcd-d73a819558b0","Type":"ContainerStarted","Data":"0ec488814f1030f7573530fc2f4023391364e9ba2569befe8796d3527f51d952"} Feb 18 19:52:07 crc kubenswrapper[4932]: I0218 19:52:07.488909 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd547864-4d03-45ae-8bb1-10a360d36599","Type":"ContainerStarted","Data":"7410562445bbd85ecddd8f8fa1c64974cd82f5bccf5b814dba01368f2c897a68"} Feb 18 19:52:07 crc kubenswrapper[4932]: I0218 19:52:07.492287 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c","Type":"ContainerStarted","Data":"9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d"} Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.503730 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"09fcfda8-434e-4759-81cc-47304cbbe9d3","Type":"ContainerStarted","Data":"29a4673339d988e2170beffb22f8b3b5cea2d6bfb31b090d279613c44a90caec"} Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.505552 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerStarted","Data":"e180b06fd671083f79001b0061a303617d1566914909b796e6dc37109bc742cf"} Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.508298 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lvg9q" event={"ID":"ca19a8de-2aaf-459e-bfcd-d73a819558b0","Type":"ContainerStarted","Data":"b7decf7f9f198ad146d800a055d0acbf836193fadb8fcea920110be23346445d"} Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.508572 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:52:08 crc 
kubenswrapper[4932]: I0218 19:52:08.508803 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.510327 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f9432af7-4713-4805-b822-efcb8b1fb21d","Type":"ContainerStarted","Data":"0e5edbbdc7dc9506430e0fb5a39f4970d5b46e1fee4aea0a909a6b7d12a0a541"} Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.525273 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.215887081 podStartE2EDuration="19.525258864s" podCreationTimestamp="2026-02-18 19:51:49 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.490242437 +0000 UTC m=+1081.072197272" lastFinishedPulling="2026-02-18 19:52:07.79961422 +0000 UTC m=+1091.381569055" observedRunningTime="2026-02-18 19:52:08.520804025 +0000 UTC m=+1092.102758890" watchObservedRunningTime="2026-02-18 19:52:08.525258864 +0000 UTC m=+1092.107213709" Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.552041 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-lvg9q" podStartSLOduration=14.352921679 podStartE2EDuration="19.552013744s" podCreationTimestamp="2026-02-18 19:51:49 +0000 UTC" firstStartedPulling="2026-02-18 19:51:59.109643089 +0000 UTC m=+1082.691597944" lastFinishedPulling="2026-02-18 19:52:04.308735164 +0000 UTC m=+1087.890690009" observedRunningTime="2026-02-18 19:52:08.541346131 +0000 UTC m=+1092.123300986" watchObservedRunningTime="2026-02-18 19:52:08.552013744 +0000 UTC m=+1092.133968619" Feb 18 19:52:08 crc kubenswrapper[4932]: I0218 19:52:08.592511 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.378848186 podStartE2EDuration="15.592497201s" podCreationTimestamp="2026-02-18 19:51:53 +0000 UTC" 
firstStartedPulling="2026-02-18 19:51:57.580727886 +0000 UTC m=+1081.162682731" lastFinishedPulling="2026-02-18 19:52:07.794376891 +0000 UTC m=+1091.376331746" observedRunningTime="2026-02-18 19:52:08.585545 +0000 UTC m=+1092.167499845" watchObservedRunningTime="2026-02-18 19:52:08.592497201 +0000 UTC m=+1092.174452046" Feb 18 19:52:09 crc kubenswrapper[4932]: I0218 19:52:09.730521 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 18 19:52:09 crc kubenswrapper[4932]: I0218 19:52:09.731069 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 18 19:52:09 crc kubenswrapper[4932]: I0218 19:52:09.774222 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.248332 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.394873 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.569346 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.623335 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.687075 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9746b6c-vpbf8"] Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.687766 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerName="dnsmasq-dns" 
containerID="cri-o://f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b" gracePeriod=10 Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.802413 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59f5bc659f-6cmgn"] Feb 18 19:52:10 crc kubenswrapper[4932]: E0218 19:52:10.804890 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab397921-9519-48e8-a5c0-5c388d54b6cd" containerName="init" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.805024 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab397921-9519-48e8-a5c0-5c388d54b6cd" containerName="init" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.805327 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab397921-9519-48e8-a5c0-5c388d54b6cd" containerName="init" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.806476 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.809603 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.842620 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59f5bc659f-6cmgn"] Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.937201 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-dns-svc\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.937250 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpjjp\" (UniqueName: 
\"kubernetes.io/projected/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-kube-api-access-rpjjp\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.937277 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-config\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.937421 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-ovsdbserver-sb\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.998028 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dhv68"] Feb 18 19:52:10 crc kubenswrapper[4932]: I0218 19:52:10.999347 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.001996 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.031199 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dhv68"] Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.042471 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-dns-svc\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.042545 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpjjp\" (UniqueName: \"kubernetes.io/projected/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-kube-api-access-rpjjp\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.042594 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-config\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.042640 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-ovsdbserver-sb\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 
19:52:11.043680 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-ovsdbserver-sb\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.044348 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-dns-svc\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.049333 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-config\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.071575 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpjjp\" (UniqueName: \"kubernetes.io/projected/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-kube-api-access-rpjjp\") pod \"dnsmasq-dns-59f5bc659f-6cmgn\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.144232 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abfc229-97bd-4301-aeca-808c88209da4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.144291 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgww2\" (UniqueName: \"kubernetes.io/projected/8abfc229-97bd-4301-aeca-808c88209da4-kube-api-access-hgww2\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.144313 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abfc229-97bd-4301-aeca-808c88209da4-combined-ca-bundle\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.144353 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8abfc229-97bd-4301-aeca-808c88209da4-config\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.144378 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8abfc229-97bd-4301-aeca-808c88209da4-ovs-rundir\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.144417 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8abfc229-97bd-4301-aeca-808c88209da4-ovn-rundir\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.149147 
4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.246073 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abfc229-97bd-4301-aeca-808c88209da4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.246415 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgww2\" (UniqueName: \"kubernetes.io/projected/8abfc229-97bd-4301-aeca-808c88209da4-kube-api-access-hgww2\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.246436 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abfc229-97bd-4301-aeca-808c88209da4-combined-ca-bundle\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.246789 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8abfc229-97bd-4301-aeca-808c88209da4-config\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.246828 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8abfc229-97bd-4301-aeca-808c88209da4-ovs-rundir\") pod \"ovn-controller-metrics-dhv68\" (UID: 
\"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.246901 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8abfc229-97bd-4301-aeca-808c88209da4-ovn-rundir\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.247213 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8abfc229-97bd-4301-aeca-808c88209da4-ovn-rundir\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.247897 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8abfc229-97bd-4301-aeca-808c88209da4-config\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.247969 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8abfc229-97bd-4301-aeca-808c88209da4-ovs-rundir\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.250485 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abfc229-97bd-4301-aeca-808c88209da4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc 
kubenswrapper[4932]: I0218 19:52:11.252114 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abfc229-97bd-4301-aeca-808c88209da4-combined-ca-bundle\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.252758 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.267915 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgww2\" (UniqueName: \"kubernetes.io/projected/8abfc229-97bd-4301-aeca-808c88209da4-kube-api-access-hgww2\") pod \"ovn-controller-metrics-dhv68\" (UID: \"8abfc229-97bd-4301-aeca-808c88209da4\") " pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.310779 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59f5bc659f-6cmgn"] Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.318514 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-dhv68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.336238 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5455f77d45-n7wh5"] Feb 18 19:52:11 crc kubenswrapper[4932]: E0218 19:52:11.336805 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerName="init" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.336839 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerName="init" Feb 18 19:52:11 crc kubenswrapper[4932]: E0218 19:52:11.336878 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerName="dnsmasq-dns" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.336889 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerName="dnsmasq-dns" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.337183 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerName="dnsmasq-dns" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.338505 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.341756 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.347851 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-dns-svc\") pod \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.347921 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5455f77d45-n7wh5"] Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.348012 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-config\") pod \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.348075 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgghm\" (UniqueName: \"kubernetes.io/projected/ca226f67-28b6-4585-a6ed-7d4394cc2a15-kube-api-access-jgghm\") pod \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\" (UID: \"ca226f67-28b6-4585-a6ed-7d4394cc2a15\") " Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.363291 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca226f67-28b6-4585-a6ed-7d4394cc2a15-kube-api-access-jgghm" (OuterVolumeSpecName: "kube-api-access-jgghm") pod "ca226f67-28b6-4585-a6ed-7d4394cc2a15" (UID: "ca226f67-28b6-4585-a6ed-7d4394cc2a15"). InnerVolumeSpecName "kube-api-access-jgghm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.395445 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.411238 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-config" (OuterVolumeSpecName: "config") pod "ca226f67-28b6-4585-a6ed-7d4394cc2a15" (UID: "ca226f67-28b6-4585-a6ed-7d4394cc2a15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.411644 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca226f67-28b6-4585-a6ed-7d4394cc2a15" (UID: "ca226f67-28b6-4585-a6ed-7d4394cc2a15"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.446078 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450295 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-config\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450350 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-sb\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450385 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6mhr\" (UniqueName: \"kubernetes.io/projected/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-kube-api-access-p6mhr\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-dns-svc\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450438 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-nb\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450525 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450538 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca226f67-28b6-4585-a6ed-7d4394cc2a15-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.450548 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgghm\" (UniqueName: \"kubernetes.io/projected/ca226f67-28b6-4585-a6ed-7d4394cc2a15-kube-api-access-jgghm\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.537719 4932 generic.go:334] "Generic (PLEG): container finished" podID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" containerID="f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b" exitCode=0 Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.538031 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.538133 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" event={"ID":"ca226f67-28b6-4585-a6ed-7d4394cc2a15","Type":"ContainerDied","Data":"f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b"} Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.538158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9746b6c-vpbf8" event={"ID":"ca226f67-28b6-4585-a6ed-7d4394cc2a15","Type":"ContainerDied","Data":"43daca4777cee280f31b3e73b817f441991f4957de8e06f5e125fa3c6e27e74a"} Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.538947 4932 scope.go:117] "RemoveContainer" containerID="f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.555856 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-config\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.555924 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-sb\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.555969 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6mhr\" (UniqueName: \"kubernetes.io/projected/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-kube-api-access-p6mhr\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " 
pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.556011 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-dns-svc\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.556028 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-nb\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.557529 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-dns-svc\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.558567 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-config\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.559127 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-nb\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.559637 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-sb\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.589133 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6mhr\" (UniqueName: \"kubernetes.io/projected/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-kube-api-access-p6mhr\") pod \"dnsmasq-dns-5455f77d45-n7wh5\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") " pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.600560 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9746b6c-vpbf8"] Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.603405 4932 scope.go:117] "RemoveContainer" containerID="fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.604149 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.610978 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b9746b6c-vpbf8"] Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.637903 4932 scope.go:117] "RemoveContainer" containerID="f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b" Feb 18 19:52:11 crc kubenswrapper[4932]: E0218 19:52:11.639654 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b\": container with ID starting with f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b not found: ID does not exist" 
containerID="f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.639693 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b"} err="failed to get container status \"f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b\": rpc error: code = NotFound desc = could not find container \"f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b\": container with ID starting with f78e93787c858fedfe99c1dd7d8972562c24e9bb3a02409a5c96dd964e9e2d1b not found: ID does not exist" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.639721 4932 scope.go:117] "RemoveContainer" containerID="fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68" Feb 18 19:52:11 crc kubenswrapper[4932]: E0218 19:52:11.640114 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68\": container with ID starting with fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68 not found: ID does not exist" containerID="fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.640165 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68"} err="failed to get container status \"fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68\": rpc error: code = NotFound desc = could not find container \"fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68\": container with ID starting with fb77fa86b478ace9e7c0af1b277313a6540008c5d9f1ff5d57280eec9e6eda68 not found: ID does not exist" Feb 18 19:52:11 crc kubenswrapper[4932]: W0218 19:52:11.681921 4932 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded4ea879_727a_4bf0_b18a_3d25d21cd31a.slice/crio-c6dc24ce9364f9c0c0feff99a0c1cf7f9c7cbf1feb5a7fedfb1ea0a4432d1046 WatchSource:0}: Error finding container c6dc24ce9364f9c0c0feff99a0c1cf7f9c7cbf1feb5a7fedfb1ea0a4432d1046: Status 404 returned error can't find the container with id c6dc24ce9364f9c0c0feff99a0c1cf7f9c7cbf1feb5a7fedfb1ea0a4432d1046 Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.682664 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59f5bc659f-6cmgn"] Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.687273 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.784636 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.786321 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.789512 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.789875 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-vktdm" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.790036 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.790265 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.793357 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.862979 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.864366 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.864487 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.864608 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f6fa544-8da9-4404-94a2-c5ea567caa32-scripts\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.864698 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7f6fa544-8da9-4404-94a2-c5ea567caa32-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.864782 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6fa544-8da9-4404-94a2-c5ea567caa32-config\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.864946 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59fdg\" (UniqueName: \"kubernetes.io/projected/7f6fa544-8da9-4404-94a2-c5ea567caa32-kube-api-access-59fdg\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.900369 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dhv68"] Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.965940 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-metrics-certs-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.966000 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.966025 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.966058 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f6fa544-8da9-4404-94a2-c5ea567caa32-scripts\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.966081 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7f6fa544-8da9-4404-94a2-c5ea567caa32-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.966101 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6fa544-8da9-4404-94a2-c5ea567caa32-config\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.966147 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-59fdg\" (UniqueName: \"kubernetes.io/projected/7f6fa544-8da9-4404-94a2-c5ea567caa32-kube-api-access-59fdg\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.967615 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7f6fa544-8da9-4404-94a2-c5ea567caa32-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.967804 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f6fa544-8da9-4404-94a2-c5ea567caa32-scripts\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.967876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f6fa544-8da9-4404-94a2-c5ea567caa32-config\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.970287 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.971658 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc 
kubenswrapper[4932]: I0218 19:52:11.979339 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f6fa544-8da9-4404-94a2-c5ea567caa32-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:11 crc kubenswrapper[4932]: I0218 19:52:11.981428 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59fdg\" (UniqueName: \"kubernetes.io/projected/7f6fa544-8da9-4404-94a2-c5ea567caa32-kube-api-access-59fdg\") pod \"ovn-northd-0\" (UID: \"7f6fa544-8da9-4404-94a2-c5ea567caa32\") " pod="openstack/ovn-northd-0" Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.063301 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.191078 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5455f77d45-n7wh5"] Feb 18 19:52:12 crc kubenswrapper[4932]: W0218 19:52:12.196942 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a9f909_edc1_4196_8a7e_8d9195ac8c0a.slice/crio-dd2b4674c44b33daae26458342de18c2079f458059fcfdba587ede237c929e79 WatchSource:0}: Error finding container dd2b4674c44b33daae26458342de18c2079f458059fcfdba587ede237c929e79: Status 404 returned error can't find the container with id dd2b4674c44b33daae26458342de18c2079f458059fcfdba587ede237c929e79 Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.497898 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:52:12 crc kubenswrapper[4932]: W0218 19:52:12.500455 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f6fa544_8da9_4404_94a2_c5ea567caa32.slice/crio-f8609eb96da20174b75c16a7d423cdd05fcfcc5272d073c967e12f6aa5b86ba2 WatchSource:0}: Error finding container f8609eb96da20174b75c16a7d423cdd05fcfcc5272d073c967e12f6aa5b86ba2: Status 404 returned error can't find the container with id f8609eb96da20174b75c16a7d423cdd05fcfcc5272d073c967e12f6aa5b86ba2 Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.546045 4932 generic.go:334] "Generic (PLEG): container finished" podID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerID="f4d207255e04261f1256876d0640f7a397c12534642315fef1d1773ac5c24dd5" exitCode=0 Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.546304 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" event={"ID":"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a","Type":"ContainerDied","Data":"f4d207255e04261f1256876d0640f7a397c12534642315fef1d1773ac5c24dd5"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.546358 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" event={"ID":"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a","Type":"ContainerStarted","Data":"dd2b4674c44b33daae26458342de18c2079f458059fcfdba587ede237c929e79"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.548681 4932 generic.go:334] "Generic (PLEG): container finished" podID="ed4ea879-727a-4bf0-b18a-3d25d21cd31a" containerID="85699d80211e1e6e06ae8a51e70269eb8105b39190aa91df9a7ef0cc4c6f3ba5" exitCode=0 Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.548742 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" event={"ID":"ed4ea879-727a-4bf0-b18a-3d25d21cd31a","Type":"ContainerDied","Data":"85699d80211e1e6e06ae8a51e70269eb8105b39190aa91df9a7ef0cc4c6f3ba5"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.548933 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" event={"ID":"ed4ea879-727a-4bf0-b18a-3d25d21cd31a","Type":"ContainerStarted","Data":"c6dc24ce9364f9c0c0feff99a0c1cf7f9c7cbf1feb5a7fedfb1ea0a4432d1046"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.554946 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7f6fa544-8da9-4404-94a2-c5ea567caa32","Type":"ContainerStarted","Data":"f8609eb96da20174b75c16a7d423cdd05fcfcc5272d073c967e12f6aa5b86ba2"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.557558 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dhv68" event={"ID":"8abfc229-97bd-4301-aeca-808c88209da4","Type":"ContainerStarted","Data":"8cd5bbb9abdc963fcdfa397efcc891dccf8d1b9ec75b11ff528ab2dd69c95bd3"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.557642 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dhv68" event={"ID":"8abfc229-97bd-4301-aeca-808c88209da4","Type":"ContainerStarted","Data":"712b10cab6f249c2fc4db604b074472bcc7fcd90b45e46959c4f67a35124d051"} Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.611951 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dhv68" podStartSLOduration=2.611900386 podStartE2EDuration="2.611900386s" podCreationTimestamp="2026-02-18 19:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:12.603213752 +0000 UTC m=+1096.185168597" watchObservedRunningTime="2026-02-18 19:52:12.611900386 +0000 UTC m=+1096.193855231" Feb 18 19:52:12 crc kubenswrapper[4932]: I0218 19:52:12.982056 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.085987 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpjjp\" (UniqueName: \"kubernetes.io/projected/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-kube-api-access-rpjjp\") pod \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.086062 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-dns-svc\") pod \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.086138 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-ovsdbserver-sb\") pod \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.086743 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-config\") pod \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\" (UID: \"ed4ea879-727a-4bf0-b18a-3d25d21cd31a\") " Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.093482 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-kube-api-access-rpjjp" (OuterVolumeSpecName: "kube-api-access-rpjjp") pod "ed4ea879-727a-4bf0-b18a-3d25d21cd31a" (UID: "ed4ea879-727a-4bf0-b18a-3d25d21cd31a"). InnerVolumeSpecName "kube-api-access-rpjjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.104644 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-config" (OuterVolumeSpecName: "config") pod "ed4ea879-727a-4bf0-b18a-3d25d21cd31a" (UID: "ed4ea879-727a-4bf0-b18a-3d25d21cd31a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.114614 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed4ea879-727a-4bf0-b18a-3d25d21cd31a" (UID: "ed4ea879-727a-4bf0-b18a-3d25d21cd31a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.142460 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed4ea879-727a-4bf0-b18a-3d25d21cd31a" (UID: "ed4ea879-727a-4bf0-b18a-3d25d21cd31a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.188717 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpjjp\" (UniqueName: \"kubernetes.io/projected/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-kube-api-access-rpjjp\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.189070 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.189099 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.189116 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4ea879-727a-4bf0-b18a-3d25d21cd31a-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.191774 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca226f67-28b6-4585-a6ed-7d4394cc2a15" path="/var/lib/kubelet/pods/ca226f67-28b6-4585-a6ed-7d4394cc2a15/volumes" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.566373 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" event={"ID":"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a","Type":"ContainerStarted","Data":"a3982f76fc3e004ac4e2e07f7087521b525063221b76493c343ca26001938a86"} Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.567331 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.569977 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.570475 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f5bc659f-6cmgn" event={"ID":"ed4ea879-727a-4bf0-b18a-3d25d21cd31a","Type":"ContainerDied","Data":"c6dc24ce9364f9c0c0feff99a0c1cf7f9c7cbf1feb5a7fedfb1ea0a4432d1046"} Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.570506 4932 scope.go:117] "RemoveContainer" containerID="85699d80211e1e6e06ae8a51e70269eb8105b39190aa91df9a7ef0cc4c6f3ba5" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.588120 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" podStartSLOduration=2.588096364 podStartE2EDuration="2.588096364s" podCreationTimestamp="2026-02-18 19:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:13.587330555 +0000 UTC m=+1097.169285420" watchObservedRunningTime="2026-02-18 19:52:13.588096364 +0000 UTC m=+1097.170051219" Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.629305 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59f5bc659f-6cmgn"] Feb 18 19:52:13 crc kubenswrapper[4932]: I0218 19:52:13.636109 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59f5bc659f-6cmgn"] Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.577876 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7f6fa544-8da9-4404-94a2-c5ea567caa32","Type":"ContainerStarted","Data":"66aab2e3af98dc2f65a6dc3564fb435262e14c83f3526943ce4f75c072c3886a"} Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.580117 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.580129 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7f6fa544-8da9-4404-94a2-c5ea567caa32","Type":"ContainerStarted","Data":"af938aaf272e82338271325c688c986bffaf4cdc3c88c8253e481cbc4c3d5cd7"} Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.581560 4932 generic.go:334] "Generic (PLEG): container finished" podID="d9dd7155-a814-4ae0-92b9-6e71461473d5" containerID="6b00a32228a57fd318a20db1c61c54b5b976211c74483d59bcbd1b6e0cb1c8ff" exitCode=0 Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.581634 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d9dd7155-a814-4ae0-92b9-6e71461473d5","Type":"ContainerDied","Data":"6b00a32228a57fd318a20db1c61c54b5b976211c74483d59bcbd1b6e0cb1c8ff"} Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.608091 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.65409031 podStartE2EDuration="3.608073651s" podCreationTimestamp="2026-02-18 19:52:11 +0000 UTC" firstStartedPulling="2026-02-18 19:52:12.502792408 +0000 UTC m=+1096.084747253" lastFinishedPulling="2026-02-18 19:52:13.456775759 +0000 UTC m=+1097.038730594" observedRunningTime="2026-02-18 19:52:14.598588567 +0000 UTC m=+1098.180543412" watchObservedRunningTime="2026-02-18 19:52:14.608073651 +0000 UTC m=+1098.190028496" Feb 18 19:52:14 crc kubenswrapper[4932]: I0218 19:52:14.724690 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 18 19:52:15 crc kubenswrapper[4932]: I0218 19:52:15.192388 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed4ea879-727a-4bf0-b18a-3d25d21cd31a" path="/var/lib/kubelet/pods/ed4ea879-727a-4bf0-b18a-3d25d21cd31a/volumes" Feb 18 19:52:15 crc kubenswrapper[4932]: I0218 19:52:15.594436 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"d9dd7155-a814-4ae0-92b9-6e71461473d5","Type":"ContainerStarted","Data":"8178a923b73598beccaa3903f2c974da013c4273b78c68f62efb4d7cc0fa4624"} Feb 18 19:52:15 crc kubenswrapper[4932]: I0218 19:52:15.597032 4932 generic.go:334] "Generic (PLEG): container finished" podID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerID="e180b06fd671083f79001b0061a303617d1566914909b796e6dc37109bc742cf" exitCode=0 Feb 18 19:52:15 crc kubenswrapper[4932]: I0218 19:52:15.597079 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerDied","Data":"e180b06fd671083f79001b0061a303617d1566914909b796e6dc37109bc742cf"} Feb 18 19:52:15 crc kubenswrapper[4932]: I0218 19:52:15.637017 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.973940996 podStartE2EDuration="32.636990257s" podCreationTimestamp="2026-02-18 19:51:43 +0000 UTC" firstStartedPulling="2026-02-18 19:51:56.883739597 +0000 UTC m=+1080.465694442" lastFinishedPulling="2026-02-18 19:52:04.546788818 +0000 UTC m=+1088.128743703" observedRunningTime="2026-02-18 19:52:15.626606861 +0000 UTC m=+1099.208561746" watchObservedRunningTime="2026-02-18 19:52:15.636990257 +0000 UTC m=+1099.218945142" Feb 18 19:52:16 crc kubenswrapper[4932]: I0218 19:52:16.607010 4932 generic.go:334] "Generic (PLEG): container finished" podID="915c727d-cb48-4649-bd71-30a5edf798d5" containerID="ab4a1ef53fc46a73ae038e80a0b9f945d129feb61eaa7b9bb7cc45f2dc7ef05f" exitCode=0 Feb 18 19:52:16 crc kubenswrapper[4932]: I0218 19:52:16.607078 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"915c727d-cb48-4649-bd71-30a5edf798d5","Type":"ContainerDied","Data":"ab4a1ef53fc46a73ae038e80a0b9f945d129feb61eaa7b9bb7cc45f2dc7ef05f"} Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.204060 4932 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-5455f77d45-n7wh5"] Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.204880 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerName="dnsmasq-dns" containerID="cri-o://a3982f76fc3e004ac4e2e07f7087521b525063221b76493c343ca26001938a86" gracePeriod=10 Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.207765 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.209806 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.228638 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d589bd999-klfsc"] Feb 18 19:52:17 crc kubenswrapper[4932]: E0218 19:52:17.228975 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4ea879-727a-4bf0-b18a-3d25d21cd31a" containerName="init" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.228990 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4ea879-727a-4bf0-b18a-3d25d21cd31a" containerName="init" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.229154 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed4ea879-727a-4bf0-b18a-3d25d21cd31a" containerName="init" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.238221 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.263038 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d589bd999-klfsc"] Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.364207 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-config\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.364261 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-nb\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.364579 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-sb\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.364613 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2b4k\" (UniqueName: \"kubernetes.io/projected/93b88bfc-e293-4af3-a085-184607bf9327-kube-api-access-j2b4k\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.364664 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-dns-svc\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.466730 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-sb\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.466852 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2b4k\" (UniqueName: \"kubernetes.io/projected/93b88bfc-e293-4af3-a085-184607bf9327-kube-api-access-j2b4k\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.466941 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-dns-svc\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.466983 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-config\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.467021 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-nb\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.467806 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-sb\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.468968 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-nb\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.469782 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-dns-svc\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.469847 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-config\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.493404 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2b4k\" (UniqueName: 
\"kubernetes.io/projected/93b88bfc-e293-4af3-a085-184607bf9327-kube-api-access-j2b4k\") pod \"dnsmasq-dns-7d589bd999-klfsc\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.614491 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.617840 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"915c727d-cb48-4649-bd71-30a5edf798d5","Type":"ContainerStarted","Data":"cfffd12e2db31f1f4eed158bad3236b5de7cd7cf1024afe930a98a37d235a483"} Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.621787 4932 generic.go:334] "Generic (PLEG): container finished" podID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerID="a3982f76fc3e004ac4e2e07f7087521b525063221b76493c343ca26001938a86" exitCode=0 Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.621835 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" event={"ID":"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a","Type":"ContainerDied","Data":"a3982f76fc3e004ac4e2e07f7087521b525063221b76493c343ca26001938a86"} Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.625781 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.653731 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.157460115 podStartE2EDuration="36.653708277s" podCreationTimestamp="2026-02-18 19:51:41 +0000 UTC" firstStartedPulling="2026-02-18 19:51:56.888883614 +0000 UTC m=+1080.470838459" lastFinishedPulling="2026-02-18 19:52:04.385131746 +0000 UTC m=+1087.967086621" observedRunningTime="2026-02-18 19:52:17.642372678 +0000 UTC m=+1101.224327543" watchObservedRunningTime="2026-02-18 19:52:17.653708277 +0000 UTC m=+1101.235663132"
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.774182 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-dns-svc\") pod \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") "
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.774342 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-nb\") pod \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") "
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.774389 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-config\") pod \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") "
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.775252 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-sb\") pod \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") "
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.775345 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6mhr\" (UniqueName: \"kubernetes.io/projected/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-kube-api-access-p6mhr\") pod \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\" (UID: \"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a\") "
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.786383 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-kube-api-access-p6mhr" (OuterVolumeSpecName: "kube-api-access-p6mhr") pod "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" (UID: "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a"). InnerVolumeSpecName "kube-api-access-p6mhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.841037 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-config" (OuterVolumeSpecName: "config") pod "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" (UID: "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.862799 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" (UID: "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.877132 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6mhr\" (UniqueName: \"kubernetes.io/projected/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-kube-api-access-p6mhr\") on node \"crc\" DevicePath \"\""
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.877161 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.877210 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.901574 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" (UID: "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.911581 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" (UID: "c1a9f909-edc1-4196-8a7e-8d9195ac8c0a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.980490 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:52:17 crc kubenswrapper[4932]: I0218 19:52:17.980525 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.145975 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d589bd999-klfsc"]
Feb 18 19:52:18 crc kubenswrapper[4932]: W0218 19:52:18.151665 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93b88bfc_e293_4af3_a085_184607bf9327.slice/crio-4479d0a19d18775cbdda9e3e29eb2fb3a08c6720c8c950eb49addc462844cb3a WatchSource:0}: Error finding container 4479d0a19d18775cbdda9e3e29eb2fb3a08c6720c8c950eb49addc462844cb3a: Status 404 returned error can't find the container with id 4479d0a19d18775cbdda9e3e29eb2fb3a08c6720c8c950eb49addc462844cb3a
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.437033 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 18 19:52:18 crc kubenswrapper[4932]: E0218 19:52:18.437628 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerName="dnsmasq-dns"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.437643 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerName="dnsmasq-dns"
Feb 18 19:52:18 crc kubenswrapper[4932]: E0218 19:52:18.437672 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerName="init"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.437678 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerName="init"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.437836 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" containerName="dnsmasq-dns"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.446188 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.451295 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.451559 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-28sds"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.451576 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.452033 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.469138 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.595716 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-cache\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.595761 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-lock\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.595779 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.595819 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.595837 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.595888 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9dxh\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-kube-api-access-v9dxh\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.631025 4932 generic.go:334] "Generic (PLEG): container finished" podID="93b88bfc-e293-4af3-a085-184607bf9327" containerID="a93d81ac35fef706c2981873bb26c1272af93758393d8995dcc39345d9e18399" exitCode=0
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.631202 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" event={"ID":"93b88bfc-e293-4af3-a085-184607bf9327","Type":"ContainerDied","Data":"a93d81ac35fef706c2981873bb26c1272af93758393d8995dcc39345d9e18399"}
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.632039 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" event={"ID":"93b88bfc-e293-4af3-a085-184607bf9327","Type":"ContainerStarted","Data":"4479d0a19d18775cbdda9e3e29eb2fb3a08c6720c8c950eb49addc462844cb3a"}
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.639592 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5" event={"ID":"c1a9f909-edc1-4196-8a7e-8d9195ac8c0a","Type":"ContainerDied","Data":"dd2b4674c44b33daae26458342de18c2079f458059fcfdba587ede237c929e79"}
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.639637 4932 scope.go:117] "RemoveContainer" containerID="a3982f76fc3e004ac4e2e07f7087521b525063221b76493c343ca26001938a86"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.639925 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5455f77d45-n7wh5"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.697378 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9dxh\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-kube-api-access-v9dxh\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.697477 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-cache\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.697501 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-lock\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.697518 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.697566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.697592 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.698071 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-lock\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.698410 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.698814 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-cache\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: E0218 19:52:18.699335 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 19:52:18 crc kubenswrapper[4932]: E0218 19:52:18.699348 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 19:52:18 crc kubenswrapper[4932]: E0218 19:52:18.699392 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift podName:c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5 nodeName:}" failed. No retries permitted until 2026-02-18 19:52:19.199377805 +0000 UTC m=+1102.781332650 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift") pod "swift-storage-0" (UID: "c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5") : configmap "swift-ring-files" not found
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.711605 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.716059 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9dxh\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-kube-api-access-v9dxh\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.718494 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.835291 4932 scope.go:117] "RemoveContainer" containerID="f4d207255e04261f1256876d0640f7a397c12534642315fef1d1773ac5c24dd5"
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.849602 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5455f77d45-n7wh5"]
Feb 18 19:52:18 crc kubenswrapper[4932]: I0218 19:52:18.863030 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5455f77d45-n7wh5"]
Feb 18 19:52:19 crc kubenswrapper[4932]: I0218 19:52:19.189893 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a9f909-edc1-4196-8a7e-8d9195ac8c0a" path="/var/lib/kubelet/pods/c1a9f909-edc1-4196-8a7e-8d9195ac8c0a/volumes"
Feb 18 19:52:19 crc kubenswrapper[4932]: I0218 19:52:19.210420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:19 crc kubenswrapper[4932]: E0218 19:52:19.210620 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 19:52:19 crc kubenswrapper[4932]: E0218 19:52:19.210742 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 19:52:19 crc kubenswrapper[4932]: E0218 19:52:19.210803 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift podName:c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5 nodeName:}" failed. No retries permitted until 2026-02-18 19:52:20.210783918 +0000 UTC m=+1103.792738763 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift") pod "swift-storage-0" (UID: "c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5") : configmap "swift-ring-files" not found
Feb 18 19:52:19 crc kubenswrapper[4932]: I0218 19:52:19.647630 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" event={"ID":"93b88bfc-e293-4af3-a085-184607bf9327","Type":"ContainerStarted","Data":"f52951b30b5592f2aeb5eae2773bb2ba20887b8705143fd09cf41ec26c0f786e"}
Feb 18 19:52:19 crc kubenswrapper[4932]: I0218 19:52:19.648800 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d589bd999-klfsc"
Feb 18 19:52:19 crc kubenswrapper[4932]: I0218 19:52:19.671216 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" podStartSLOduration=2.671192375 podStartE2EDuration="2.671192375s" podCreationTimestamp="2026-02-18 19:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:19.667459154 +0000 UTC m=+1103.249413999" watchObservedRunningTime="2026-02-18 19:52:19.671192375 +0000 UTC m=+1103.253147220"
Feb 18 19:52:20 crc kubenswrapper[4932]: I0218 19:52:20.229011 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:20 crc kubenswrapper[4932]: E0218 19:52:20.229634 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 19:52:20 crc kubenswrapper[4932]: E0218 19:52:20.229749 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 19:52:20 crc kubenswrapper[4932]: E0218 19:52:20.229803 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift podName:c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5 nodeName:}" failed. No retries permitted until 2026-02-18 19:52:22.229788891 +0000 UTC m=+1105.811743726 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift") pod "swift-storage-0" (UID: "c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5") : configmap "swift-ring-files" not found
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.268743 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:22 crc kubenswrapper[4932]: E0218 19:52:22.268951 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 19:52:22 crc kubenswrapper[4932]: E0218 19:52:22.269217 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 19:52:22 crc kubenswrapper[4932]: E0218 19:52:22.269278 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift podName:c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5 nodeName:}" failed. No retries permitted until 2026-02-18 19:52:26.269260791 +0000 UTC m=+1109.851215636 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift") pod "swift-storage-0" (UID: "c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5") : configmap "swift-ring-files" not found
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.340452 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-sq9sk"]
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.341797 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.344564 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.344696 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.344902 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.353227 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sq9sk"]
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.474305 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-swiftconf\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.474357 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04953cd9-9de3-46b5-8b86-382b2d2291cd-etc-swift\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.474392 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-ring-data-devices\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.474619 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf5rv\" (UniqueName: \"kubernetes.io/projected/04953cd9-9de3-46b5-8b86-382b2d2291cd-kube-api-access-zf5rv\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.474695 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-dispersionconf\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.474894 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-scripts\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.475057 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-combined-ca-bundle\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.576526 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-swiftconf\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.576591 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04953cd9-9de3-46b5-8b86-382b2d2291cd-etc-swift\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.576634 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-ring-data-devices\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.576666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf5rv\" (UniqueName: \"kubernetes.io/projected/04953cd9-9de3-46b5-8b86-382b2d2291cd-kube-api-access-zf5rv\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.576692 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-dispersionconf\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.576898 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-scripts\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.577097 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04953cd9-9de3-46b5-8b86-382b2d2291cd-etc-swift\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.577314 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-combined-ca-bundle\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.578558 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-ring-data-devices\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.578615 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-scripts\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.584676 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-swiftconf\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.592848 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-combined-ca-bundle\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.592854 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-dispersionconf\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.597279 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf5rv\" (UniqueName: \"kubernetes.io/projected/04953cd9-9de3-46b5-8b86-382b2d2291cd-kube-api-access-zf5rv\") pod \"swift-ring-rebalance-sq9sk\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:22 crc kubenswrapper[4932]: I0218 19:52:22.671749 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sq9sk"
Feb 18 19:52:23 crc kubenswrapper[4932]: W0218 19:52:23.130963 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04953cd9_9de3_46b5_8b86_382b2d2291cd.slice/crio-ed4ba0f7587a73b183dcf28620debb7555eadcf63d796c9e3aed7de82b80093c WatchSource:0}: Error finding container ed4ba0f7587a73b183dcf28620debb7555eadcf63d796c9e3aed7de82b80093c: Status 404 returned error can't find the container with id ed4ba0f7587a73b183dcf28620debb7555eadcf63d796c9e3aed7de82b80093c
Feb 18 19:52:23 crc kubenswrapper[4932]: I0218 19:52:23.131913 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-sq9sk"]
Feb 18 19:52:23 crc kubenswrapper[4932]: I0218 19:52:23.133352 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 18 19:52:23 crc kubenswrapper[4932]: I0218 19:52:23.133376 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 18 19:52:23 crc kubenswrapper[4932]: I0218 19:52:23.683914 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sq9sk" event={"ID":"04953cd9-9de3-46b5-8b86-382b2d2291cd","Type":"ContainerStarted","Data":"ed4ba0f7587a73b183dcf28620debb7555eadcf63d796c9e3aed7de82b80093c"}
Feb 18 19:52:24 crc kubenswrapper[4932]: I0218 19:52:24.503421 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:52:24 crc kubenswrapper[4932]: I0218 19:52:24.503491 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:52:25 crc kubenswrapper[4932]: I0218 19:52:25.843604 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:52:25 crc kubenswrapper[4932]: I0218 19:52:25.998793 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:52:26 crc kubenswrapper[4932]: I0218 19:52:26.365260 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0"
Feb 18 19:52:26 crc kubenswrapper[4932]: E0218 19:52:26.365507 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 19:52:26 crc kubenswrapper[4932]: E0218 19:52:26.365538 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 19:52:26 crc kubenswrapper[4932]: E0218 19:52:26.365606 4932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift podName:c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5 nodeName:}" failed. No retries permitted until 2026-02-18 19:52:34.36558702 +0000 UTC m=+1117.947541875 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift") pod "swift-storage-0" (UID: "c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5") : configmap "swift-ring-files" not found
Feb 18 19:52:26 crc kubenswrapper[4932]: E0218 19:52:26.982543 4932 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.190:53834->38.102.83.190:41227: write tcp 38.102.83.190:53834->38.102.83.190:41227: write: broken pipe
Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.605585 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.605646 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.615908 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d589bd999-klfsc"
Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.668783 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668d7c8657-fkpfr"]
Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.678470 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" containerName="dnsmasq-dns" containerID="cri-o://0f7421be32fb94af35c77032e6ff053b63a9d0e9743ff1d49b4fdfb654ec47b1" gracePeriod=10
Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.755640 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 18 19:52:27 crc kubenswrapper[4932]: I0218 19:52:27.870792 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 18 19:52:28 crc kubenswrapper[4932]: I0218 19:52:28.734394 4932 generic.go:334] "Generic (PLEG): container finished" podID="7182e8ba-c70f-44ce-b628-21107829cb83" containerID="0f7421be32fb94af35c77032e6ff053b63a9d0e9743ff1d49b4fdfb654ec47b1" exitCode=0
Feb 18 19:52:28 crc kubenswrapper[4932]: I0218 19:52:28.735192 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" event={"ID":"7182e8ba-c70f-44ce-b628-21107829cb83","Type":"ContainerDied","Data":"0f7421be32fb94af35c77032e6ff053b63a9d0e9743ff1d49b4fdfb654ec47b1"}
Feb 18 19:52:29 crc kubenswrapper[4932]: E0218 19:52:29.966856 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741"
Feb 18 19:52:29 crc kubenswrapper[4932]: E0218 19:52:29.967080 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.enable-remote-write-receiver --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus 
--web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnvgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(cf98dd42-289f-43fa-b4dc-c6ff814a3c25): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.292504 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.465607 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-config\") pod \"7182e8ba-c70f-44ce-b628-21107829cb83\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.465798 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t97ph\" (UniqueName: \"kubernetes.io/projected/7182e8ba-c70f-44ce-b628-21107829cb83-kube-api-access-t97ph\") pod \"7182e8ba-c70f-44ce-b628-21107829cb83\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.465866 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-dns-svc\") pod \"7182e8ba-c70f-44ce-b628-21107829cb83\" (UID: \"7182e8ba-c70f-44ce-b628-21107829cb83\") " Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.470932 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7182e8ba-c70f-44ce-b628-21107829cb83-kube-api-access-t97ph" (OuterVolumeSpecName: "kube-api-access-t97ph") pod "7182e8ba-c70f-44ce-b628-21107829cb83" (UID: "7182e8ba-c70f-44ce-b628-21107829cb83"). InnerVolumeSpecName "kube-api-access-t97ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.504670 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7182e8ba-c70f-44ce-b628-21107829cb83" (UID: "7182e8ba-c70f-44ce-b628-21107829cb83"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.507640 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-config" (OuterVolumeSpecName: "config") pod "7182e8ba-c70f-44ce-b628-21107829cb83" (UID: "7182e8ba-c70f-44ce-b628-21107829cb83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.568440 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t97ph\" (UniqueName: \"kubernetes.io/projected/7182e8ba-c70f-44ce-b628-21107829cb83-kube-api-access-t97ph\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.569536 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.569582 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7182e8ba-c70f-44ce-b628-21107829cb83-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.751802 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" event={"ID":"7182e8ba-c70f-44ce-b628-21107829cb83","Type":"ContainerDied","Data":"4397e01bf815b57b3796c200c92c0f185a71fad8907ccac4ea649586543a8255"} Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.751857 4932 scope.go:117] "RemoveContainer" containerID="0f7421be32fb94af35c77032e6ff053b63a9d0e9743ff1d49b4fdfb654ec47b1" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.751990 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-668d7c8657-fkpfr" Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.789498 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-668d7c8657-fkpfr"] Feb 18 19:52:30 crc kubenswrapper[4932]: I0218 19:52:30.795208 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-668d7c8657-fkpfr"] Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.192225 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" path="/var/lib/kubelet/pods/7182e8ba-c70f-44ce-b628-21107829cb83/volumes" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.852204 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9pgp9"] Feb 18 19:52:31 crc kubenswrapper[4932]: E0218 19:52:31.852602 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" containerName="init" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.852621 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" containerName="init" Feb 18 19:52:31 crc kubenswrapper[4932]: E0218 19:52:31.852655 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" containerName="dnsmasq-dns" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.852662 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" containerName="dnsmasq-dns" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.852853 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7182e8ba-c70f-44ce-b628-21107829cb83" containerName="dnsmasq-dns" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.853557 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.856449 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.872048 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9pgp9"] Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.892708 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkd5t\" (UniqueName: \"kubernetes.io/projected/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-kube-api-access-rkd5t\") pod \"root-account-create-update-9pgp9\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.892833 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-operator-scripts\") pod \"root-account-create-update-9pgp9\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.994610 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkd5t\" (UniqueName: \"kubernetes.io/projected/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-kube-api-access-rkd5t\") pod \"root-account-create-update-9pgp9\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.994704 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-operator-scripts\") pod \"root-account-create-update-9pgp9\" (UID: 
\"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:31 crc kubenswrapper[4932]: I0218 19:52:31.996010 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-operator-scripts\") pod \"root-account-create-update-9pgp9\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.017782 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkd5t\" (UniqueName: \"kubernetes.io/projected/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-kube-api-access-rkd5t\") pod \"root-account-create-update-9pgp9\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") " pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.051872 4932 scope.go:117] "RemoveContainer" containerID="21d90ef666981de2a2798c5a9811496799691c81e9c63553c393f18b1c049e7d" Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.161922 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.192434 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9pgp9" Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.685686 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9pgp9"] Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.770913 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9pgp9" event={"ID":"3bd41ee5-d385-424f-996a-b3baf7f9eb8a","Type":"ContainerStarted","Data":"5eacdf25818da33933d5d61415a3ca0021d992738e8dfeca409d5ccd0c748a39"} Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.773445 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sq9sk" event={"ID":"04953cd9-9de3-46b5-8b86-382b2d2291cd","Type":"ContainerStarted","Data":"3511d7edf13acf4b55c85650c80b80f04682a6a62d5515928313b0d0eefcc028"} Feb 18 19:52:32 crc kubenswrapper[4932]: I0218 19:52:32.799398 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-sq9sk" podStartSLOduration=1.793678818 podStartE2EDuration="10.799377317s" podCreationTimestamp="2026-02-18 19:52:22 +0000 UTC" firstStartedPulling="2026-02-18 19:52:23.134970148 +0000 UTC m=+1106.716925023" lastFinishedPulling="2026-02-18 19:52:32.140668647 +0000 UTC m=+1115.722623522" observedRunningTime="2026-02-18 19:52:32.789859843 +0000 UTC m=+1116.371814698" watchObservedRunningTime="2026-02-18 19:52:32.799377317 +0000 UTC m=+1116.381332172" Feb 18 19:52:33 crc kubenswrapper[4932]: E0218 19:52:33.449310 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bd41ee5_d385_424f_996a_b3baf7f9eb8a.slice/crio-conmon-0c56a84dec06134e2f4b962a1631f1595e0dce10e33a951ccd5303bade9b2a6e.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bd41ee5_d385_424f_996a_b3baf7f9eb8a.slice/crio-0c56a84dec06134e2f4b962a1631f1595e0dce10e33a951ccd5303bade9b2a6e.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:52:33 crc kubenswrapper[4932]: I0218 19:52:33.796381 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerStarted","Data":"66d84470994100b42a53acf4561ffbafa4e810bfb2c143ce053c40ae82620693"} Feb 18 19:52:33 crc kubenswrapper[4932]: I0218 19:52:33.799138 4932 generic.go:334] "Generic (PLEG): container finished" podID="3bd41ee5-d385-424f-996a-b3baf7f9eb8a" containerID="0c56a84dec06134e2f4b962a1631f1595e0dce10e33a951ccd5303bade9b2a6e" exitCode=0 Feb 18 19:52:33 crc kubenswrapper[4932]: I0218 19:52:33.800896 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9pgp9" event={"ID":"3bd41ee5-d385-424f-996a-b3baf7f9eb8a","Type":"ContainerDied","Data":"0c56a84dec06134e2f4b962a1631f1595e0dce10e33a951ccd5303bade9b2a6e"} Feb 18 19:52:34 crc kubenswrapper[4932]: I0218 19:52:34.442599 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:34 crc kubenswrapper[4932]: E0218 19:52:34.442787 4932 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:52:34 crc kubenswrapper[4932]: E0218 19:52:34.443015 4932 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:52:34 crc kubenswrapper[4932]: E0218 19:52:34.443089 4932 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift podName:c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5 nodeName:}" failed. No retries permitted until 2026-02-18 19:52:50.443057611 +0000 UTC m=+1134.025012456 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift") pod "swift-storage-0" (UID: "c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5") : configmap "swift-ring-files" not found Feb 18 19:52:34 crc kubenswrapper[4932]: I0218 19:52:34.906278 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-js74w"] Feb 18 19:52:34 crc kubenswrapper[4932]: I0218 19:52:34.908721 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-js74w" Feb 18 19:52:34 crc kubenswrapper[4932]: I0218 19:52:34.915129 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-js74w"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.002266 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5833-account-create-update-fxm2t"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.003539 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.006366 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.011157 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5833-account-create-update-fxm2t"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.054096 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-operator-scripts\") pod \"glance-db-create-js74w\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.054332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8cd\" (UniqueName: \"kubernetes.io/projected/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-kube-api-access-dd8cd\") pod \"glance-db-create-js74w\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.054397 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mkrl\" (UniqueName: \"kubernetes.io/projected/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-kube-api-access-7mkrl\") pod \"glance-5833-account-create-update-fxm2t\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.054448 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-operator-scripts\") pod \"glance-5833-account-create-update-fxm2t\" 
(UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.081611 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-99qbh" podUID="039d44bb-1ad0-4916-8ef2-3cece4829506" containerName="ovn-controller" probeResult="failure" output=< Feb 18 19:52:35 crc kubenswrapper[4932]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 19:52:35 crc kubenswrapper[4932]: > Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.155937 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd8cd\" (UniqueName: \"kubernetes.io/projected/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-kube-api-access-dd8cd\") pod \"glance-db-create-js74w\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.156000 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mkrl\" (UniqueName: \"kubernetes.io/projected/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-kube-api-access-7mkrl\") pod \"glance-5833-account-create-update-fxm2t\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.156023 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-operator-scripts\") pod \"glance-5833-account-create-update-fxm2t\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.156101 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-operator-scripts\") pod \"glance-db-create-js74w\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.156839 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-operator-scripts\") pod \"glance-5833-account-create-update-fxm2t\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.157390 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-operator-scripts\") pod \"glance-db-create-js74w\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.175326 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd8cd\" (UniqueName: \"kubernetes.io/projected/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-kube-api-access-dd8cd\") pod \"glance-db-create-js74w\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.175898 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mkrl\" (UniqueName: \"kubernetes.io/projected/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-kube-api-access-7mkrl\") pod \"glance-5833-account-create-update-fxm2t\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") " pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.238206 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-js74w" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.323926 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.716626 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zhvln"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.717628 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.755575 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zhvln"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.765983 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64352a4d-f3af-44e1-b1d7-cc5e125de560-operator-scripts\") pod \"keystone-db-create-zhvln\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.766031 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4v6z\" (UniqueName: \"kubernetes.io/projected/64352a4d-f3af-44e1-b1d7-cc5e125de560-kube-api-access-q4v6z\") pod \"keystone-db-create-zhvln\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.835582 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bd21-account-create-update-kcn9v"] Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.837325 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bd21-account-create-update-kcn9v"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.844830 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.847344 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9pgp9" event={"ID":"3bd41ee5-d385-424f-996a-b3baf7f9eb8a","Type":"ContainerDied","Data":"5eacdf25818da33933d5d61415a3ca0021d992738e8dfeca409d5ccd0c748a39"}
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.847382 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eacdf25818da33933d5d61415a3ca0021d992738e8dfeca409d5ccd0c748a39"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.847710 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bd21-account-create-update-kcn9v"]
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.868081 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64352a4d-f3af-44e1-b1d7-cc5e125de560-operator-scripts\") pod \"keystone-db-create-zhvln\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " pod="openstack/keystone-db-create-zhvln"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.868130 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4v6z\" (UniqueName: \"kubernetes.io/projected/64352a4d-f3af-44e1-b1d7-cc5e125de560-kube-api-access-q4v6z\") pod \"keystone-db-create-zhvln\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " pod="openstack/keystone-db-create-zhvln"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.869073 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64352a4d-f3af-44e1-b1d7-cc5e125de560-operator-scripts\") pod \"keystone-db-create-zhvln\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " pod="openstack/keystone-db-create-zhvln"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.887336 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4v6z\" (UniqueName: \"kubernetes.io/projected/64352a4d-f3af-44e1-b1d7-cc5e125de560-kube-api-access-q4v6z\") pod \"keystone-db-create-zhvln\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " pod="openstack/keystone-db-create-zhvln"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.943670 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9pgp9"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.960793 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rw8qr"]
Feb 18 19:52:35 crc kubenswrapper[4932]: E0218 19:52:35.961256 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd41ee5-d385-424f-996a-b3baf7f9eb8a" containerName="mariadb-account-create-update"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.961275 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd41ee5-d385-424f-996a-b3baf7f9eb8a" containerName="mariadb-account-create-update"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.961438 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd41ee5-d385-424f-996a-b3baf7f9eb8a" containerName="mariadb-account-create-update"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.961995 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rw8qr"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.970375 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-operator-scripts\") pod \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") "
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.970460 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkd5t\" (UniqueName: \"kubernetes.io/projected/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-kube-api-access-rkd5t\") pod \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\" (UID: \"3bd41ee5-d385-424f-996a-b3baf7f9eb8a\") "
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.970663 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk24q\" (UniqueName: \"kubernetes.io/projected/56349fdd-8b87-4910-b182-555b5913d5ee-kube-api-access-jk24q\") pod \"keystone-bd21-account-create-update-kcn9v\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " pod="openstack/keystone-bd21-account-create-update-kcn9v"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.970754 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56349fdd-8b87-4910-b182-555b5913d5ee-operator-scripts\") pod \"keystone-bd21-account-create-update-kcn9v\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " pod="openstack/keystone-bd21-account-create-update-kcn9v"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.970813 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vggbn\" (UniqueName: \"kubernetes.io/projected/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-kube-api-access-vggbn\") pod \"placement-db-create-rw8qr\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " pod="openstack/placement-db-create-rw8qr"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.970849 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-operator-scripts\") pod \"placement-db-create-rw8qr\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " pod="openstack/placement-db-create-rw8qr"
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.971763 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bd41ee5-d385-424f-996a-b3baf7f9eb8a" (UID: "3bd41ee5-d385-424f-996a-b3baf7f9eb8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.974814 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-kube-api-access-rkd5t" (OuterVolumeSpecName: "kube-api-access-rkd5t") pod "3bd41ee5-d385-424f-996a-b3baf7f9eb8a" (UID: "3bd41ee5-d385-424f-996a-b3baf7f9eb8a"). InnerVolumeSpecName "kube-api-access-rkd5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:52:35 crc kubenswrapper[4932]: I0218 19:52:35.976132 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rw8qr"]
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.038749 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zhvln"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.067767 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e952-account-create-update-jjrs6"]
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.071886 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e952-account-create-update-jjrs6"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.072066 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56349fdd-8b87-4910-b182-555b5913d5ee-operator-scripts\") pod \"keystone-bd21-account-create-update-kcn9v\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " pod="openstack/keystone-bd21-account-create-update-kcn9v"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.072103 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vggbn\" (UniqueName: \"kubernetes.io/projected/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-kube-api-access-vggbn\") pod \"placement-db-create-rw8qr\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " pod="openstack/placement-db-create-rw8qr"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.072136 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-operator-scripts\") pod \"placement-db-create-rw8qr\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " pod="openstack/placement-db-create-rw8qr"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.072205 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk24q\" (UniqueName: \"kubernetes.io/projected/56349fdd-8b87-4910-b182-555b5913d5ee-kube-api-access-jk24q\") pod \"keystone-bd21-account-create-update-kcn9v\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " pod="openstack/keystone-bd21-account-create-update-kcn9v"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.072279 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.072290 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkd5t\" (UniqueName: \"kubernetes.io/projected/3bd41ee5-d385-424f-996a-b3baf7f9eb8a-kube-api-access-rkd5t\") on node \"crc\" DevicePath \"\""
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.073219 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56349fdd-8b87-4910-b182-555b5913d5ee-operator-scripts\") pod \"keystone-bd21-account-create-update-kcn9v\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " pod="openstack/keystone-bd21-account-create-update-kcn9v"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.073225 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-operator-scripts\") pod \"placement-db-create-rw8qr\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " pod="openstack/placement-db-create-rw8qr"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.075162 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.082507 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e952-account-create-update-jjrs6"]
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.096524 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk24q\" (UniqueName: \"kubernetes.io/projected/56349fdd-8b87-4910-b182-555b5913d5ee-kube-api-access-jk24q\") pod \"keystone-bd21-account-create-update-kcn9v\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " pod="openstack/keystone-bd21-account-create-update-kcn9v"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.099080 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vggbn\" (UniqueName: \"kubernetes.io/projected/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-kube-api-access-vggbn\") pod \"placement-db-create-rw8qr\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " pod="openstack/placement-db-create-rw8qr"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.176623 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/35590261-332c-47e0-89e9-4eef3fd36086-kube-api-access-c7g9z\") pod \"placement-e952-account-create-update-jjrs6\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " pod="openstack/placement-e952-account-create-update-jjrs6"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.177020 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35590261-332c-47e0-89e9-4eef3fd36086-operator-scripts\") pod \"placement-e952-account-create-update-jjrs6\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " pod="openstack/placement-e952-account-create-update-jjrs6"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.250094 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bd21-account-create-update-kcn9v"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.279393 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/35590261-332c-47e0-89e9-4eef3fd36086-kube-api-access-c7g9z\") pod \"placement-e952-account-create-update-jjrs6\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " pod="openstack/placement-e952-account-create-update-jjrs6"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.279548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35590261-332c-47e0-89e9-4eef3fd36086-operator-scripts\") pod \"placement-e952-account-create-update-jjrs6\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " pod="openstack/placement-e952-account-create-update-jjrs6"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.281859 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rw8qr"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.291641 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5833-account-create-update-fxm2t"]
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.291936 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35590261-332c-47e0-89e9-4eef3fd36086-operator-scripts\") pod \"placement-e952-account-create-update-jjrs6\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " pod="openstack/placement-e952-account-create-update-jjrs6"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.308502 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/35590261-332c-47e0-89e9-4eef3fd36086-kube-api-access-c7g9z\") pod \"placement-e952-account-create-update-jjrs6\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " pod="openstack/placement-e952-account-create-update-jjrs6"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.395287 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e952-account-create-update-jjrs6"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.401298 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-js74w"]
Feb 18 19:52:36 crc kubenswrapper[4932]: W0218 19:52:36.408603 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4c8a6a6_4944_4c6f_be98_9dde833b89e5.slice/crio-a795285cf0cb2a54f1126955d37a9b6d8e276565b9e0d79ceb7a3f9ba32bad9b WatchSource:0}: Error finding container a795285cf0cb2a54f1126955d37a9b6d8e276565b9e0d79ceb7a3f9ba32bad9b: Status 404 returned error can't find the container with id a795285cf0cb2a54f1126955d37a9b6d8e276565b9e0d79ceb7a3f9ba32bad9b
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.523520 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zhvln"]
Feb 18 19:52:36 crc kubenswrapper[4932]: W0218 19:52:36.566358 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64352a4d_f3af_44e1_b1d7_cc5e125de560.slice/crio-e989b7ee4c814fc4ee53473b2356b223211c09b3f0c143affed0c93ec3ad0f14 WatchSource:0}: Error finding container e989b7ee4c814fc4ee53473b2356b223211c09b3f0c143affed0c93ec3ad0f14: Status 404 returned error can't find the container with id e989b7ee4c814fc4ee53473b2356b223211c09b3f0c143affed0c93ec3ad0f14
Feb 18 19:52:36 crc kubenswrapper[4932]: E0218 19:52:36.706421 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.730939 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bd21-account-create-update-kcn9v"]
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.856209 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerStarted","Data":"1898dd90c7cb5f44526cee3dcba285d60ab2aa3db3c6ae91c6ffaee8a1e5c768"}
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.857167 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bd21-account-create-update-kcn9v" event={"ID":"56349fdd-8b87-4910-b182-555b5913d5ee","Type":"ContainerStarted","Data":"f9bb2d66e25bd07650a62fc4ddaa3bf964c84c4c8996178f6cc499147ca25363"}
Feb 18 19:52:36 crc kubenswrapper[4932]: E0218 19:52:36.858247 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.859865 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zhvln" event={"ID":"64352a4d-f3af-44e1-b1d7-cc5e125de560","Type":"ContainerStarted","Data":"2cfcad461c33bcb694d12209c0cb7b72420cbc06fd09263f1f26b50ea451f974"}
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.859921 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zhvln" event={"ID":"64352a4d-f3af-44e1-b1d7-cc5e125de560","Type":"ContainerStarted","Data":"e989b7ee4c814fc4ee53473b2356b223211c09b3f0c143affed0c93ec3ad0f14"}
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.862194 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5833-account-create-update-fxm2t" event={"ID":"7fa1fef8-5a2e-4518-8641-d4b594fc29a3","Type":"ContainerStarted","Data":"3ce1a237abcba8eb5dacdaaf6767d6692224b8089fbea09e0b1408de503e1b1a"}
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.862228 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5833-account-create-update-fxm2t" event={"ID":"7fa1fef8-5a2e-4518-8641-d4b594fc29a3","Type":"ContainerStarted","Data":"ce3621b623070bf468ed09862ce254a24da7ba911cfd39df905bb1ca3d03fb1e"}
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.864375 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9pgp9"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.864814 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-js74w" event={"ID":"c4c8a6a6-4944-4c6f-be98-9dde833b89e5","Type":"ContainerStarted","Data":"eeb81a13449459a4c7d2237c075a2110a61a815c3e8cc4a439843e5121373f28"}
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.864871 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-js74w" event={"ID":"c4c8a6a6-4944-4c6f-be98-9dde833b89e5","Type":"ContainerStarted","Data":"a795285cf0cb2a54f1126955d37a9b6d8e276565b9e0d79ceb7a3f9ba32bad9b"}
Feb 18 19:52:36 crc kubenswrapper[4932]: W0218 19:52:36.925207 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35590261_332c_47e0_89e9_4eef3fd36086.slice/crio-a57a4488cd2456a40afe2cc7b60575c26f106f5d96e0170780f8f565f70f3047 WatchSource:0}: Error finding container a57a4488cd2456a40afe2cc7b60575c26f106f5d96e0170780f8f565f70f3047: Status 404 returned error can't find the container with id a57a4488cd2456a40afe2cc7b60575c26f106f5d96e0170780f8f565f70f3047
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.926945 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rw8qr"]
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.928846 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-zhvln" podStartSLOduration=1.928829511 podStartE2EDuration="1.928829511s" podCreationTimestamp="2026-02-18 19:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:36.899033187 +0000 UTC m=+1120.480988032" watchObservedRunningTime="2026-02-18 19:52:36.928829511 +0000 UTC m=+1120.510784356"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.953487 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-5833-account-create-update-fxm2t" podStartSLOduration=2.953469677 podStartE2EDuration="2.953469677s" podCreationTimestamp="2026-02-18 19:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:36.916400744 +0000 UTC m=+1120.498355589" watchObservedRunningTime="2026-02-18 19:52:36.953469677 +0000 UTC m=+1120.535424522"
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.954008 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e952-account-create-update-jjrs6"]
Feb 18 19:52:36 crc kubenswrapper[4932]: I0218 19:52:36.965477 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-js74w" podStartSLOduration=2.965458032 podStartE2EDuration="2.965458032s" podCreationTimestamp="2026-02-18 19:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:36.932706166 +0000 UTC m=+1120.514661011" watchObservedRunningTime="2026-02-18 19:52:36.965458032 +0000 UTC m=+1120.547412877"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.286566 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-vtbzd"]
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.287809 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-vtbzd"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.319020 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-vtbzd"]
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.407814 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bb1c31-7377-432f-8434-72981200f1ac-operator-scripts\") pod \"watcher-db-create-vtbzd\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " pod="openstack/watcher-db-create-vtbzd"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.407896 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgsw\" (UniqueName: \"kubernetes.io/projected/02bb1c31-7377-432f-8434-72981200f1ac-kube-api-access-dzgsw\") pod \"watcher-db-create-vtbzd\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " pod="openstack/watcher-db-create-vtbzd"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.468126 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-734d-account-create-update-stk6x"]
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.469860 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-734d-account-create-update-stk6x"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.471829 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.478626 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-734d-account-create-update-stk6x"]
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.509825 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bb1c31-7377-432f-8434-72981200f1ac-operator-scripts\") pod \"watcher-db-create-vtbzd\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " pod="openstack/watcher-db-create-vtbzd"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.509890 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzgsw\" (UniqueName: \"kubernetes.io/projected/02bb1c31-7377-432f-8434-72981200f1ac-kube-api-access-dzgsw\") pod \"watcher-db-create-vtbzd\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " pod="openstack/watcher-db-create-vtbzd"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.510736 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bb1c31-7377-432f-8434-72981200f1ac-operator-scripts\") pod \"watcher-db-create-vtbzd\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " pod="openstack/watcher-db-create-vtbzd"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.540100 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzgsw\" (UniqueName: \"kubernetes.io/projected/02bb1c31-7377-432f-8434-72981200f1ac-kube-api-access-dzgsw\") pod \"watcher-db-create-vtbzd\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " pod="openstack/watcher-db-create-vtbzd"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.611960 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hk7r\" (UniqueName: \"kubernetes.io/projected/bec590bc-e2ef-49e0-80be-27af6f69aa06-kube-api-access-4hk7r\") pod \"watcher-734d-account-create-update-stk6x\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " pod="openstack/watcher-734d-account-create-update-stk6x"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.612044 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec590bc-e2ef-49e0-80be-27af6f69aa06-operator-scripts\") pod \"watcher-734d-account-create-update-stk6x\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " pod="openstack/watcher-734d-account-create-update-stk6x"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.620676 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-vtbzd"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.713210 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec590bc-e2ef-49e0-80be-27af6f69aa06-operator-scripts\") pod \"watcher-734d-account-create-update-stk6x\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " pod="openstack/watcher-734d-account-create-update-stk6x"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.714018 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hk7r\" (UniqueName: \"kubernetes.io/projected/bec590bc-e2ef-49e0-80be-27af6f69aa06-kube-api-access-4hk7r\") pod \"watcher-734d-account-create-update-stk6x\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " pod="openstack/watcher-734d-account-create-update-stk6x"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.714602 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec590bc-e2ef-49e0-80be-27af6f69aa06-operator-scripts\") pod \"watcher-734d-account-create-update-stk6x\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " pod="openstack/watcher-734d-account-create-update-stk6x"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.731377 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hk7r\" (UniqueName: \"kubernetes.io/projected/bec590bc-e2ef-49e0-80be-27af6f69aa06-kube-api-access-4hk7r\") pod \"watcher-734d-account-create-update-stk6x\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " pod="openstack/watcher-734d-account-create-update-stk6x"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.785269 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-734d-account-create-update-stk6x"
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.907959 4932 generic.go:334] "Generic (PLEG): container finished" podID="35590261-332c-47e0-89e9-4eef3fd36086" containerID="3b567de8b4f1ae33989815fad19a6d8b9f69d7df099f4fd8ff235740848c1cc0" exitCode=0
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.908023 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e952-account-create-update-jjrs6" event={"ID":"35590261-332c-47e0-89e9-4eef3fd36086","Type":"ContainerDied","Data":"3b567de8b4f1ae33989815fad19a6d8b9f69d7df099f4fd8ff235740848c1cc0"}
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.908047 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e952-account-create-update-jjrs6" event={"ID":"35590261-332c-47e0-89e9-4eef3fd36086","Type":"ContainerStarted","Data":"a57a4488cd2456a40afe2cc7b60575c26f106f5d96e0170780f8f565f70f3047"}
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.911752 4932 generic.go:334] "Generic (PLEG): container finished" podID="56349fdd-8b87-4910-b182-555b5913d5ee" containerID="5e6b5516d234b57d2f859d33d51d54c0aee524d02399dad696a4642cf7cceb8a" exitCode=0
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.911856 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bd21-account-create-update-kcn9v" event={"ID":"56349fdd-8b87-4910-b182-555b5913d5ee","Type":"ContainerDied","Data":"5e6b5516d234b57d2f859d33d51d54c0aee524d02399dad696a4642cf7cceb8a"}
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.914835 4932 generic.go:334] "Generic (PLEG): container finished" podID="26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" containerID="439c6cd70d2e38e21f55a810c1fb66ab1e1dc66541977f85b2ca4f91d6caf61b" exitCode=0
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.914890 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rw8qr" event={"ID":"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5","Type":"ContainerDied","Data":"439c6cd70d2e38e21f55a810c1fb66ab1e1dc66541977f85b2ca4f91d6caf61b"}
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.914909 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rw8qr" event={"ID":"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5","Type":"ContainerStarted","Data":"1d674917a20214787e1b8129748fdeaa37c9d2e1ee0acfb9283d23f2c9010653"}
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.917741 4932 generic.go:334] "Generic (PLEG): container finished" podID="64352a4d-f3af-44e1-b1d7-cc5e125de560" containerID="2cfcad461c33bcb694d12209c0cb7b72420cbc06fd09263f1f26b50ea451f974" exitCode=0
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.917896 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zhvln" event={"ID":"64352a4d-f3af-44e1-b1d7-cc5e125de560","Type":"ContainerDied","Data":"2cfcad461c33bcb694d12209c0cb7b72420cbc06fd09263f1f26b50ea451f974"}
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.920960 4932 generic.go:334] "Generic (PLEG): container finished" podID="7fa1fef8-5a2e-4518-8641-d4b594fc29a3" containerID="3ce1a237abcba8eb5dacdaaf6767d6692224b8089fbea09e0b1408de503e1b1a" exitCode=0
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.921004 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5833-account-create-update-fxm2t" event={"ID":"7fa1fef8-5a2e-4518-8641-d4b594fc29a3","Type":"ContainerDied","Data":"3ce1a237abcba8eb5dacdaaf6767d6692224b8089fbea09e0b1408de503e1b1a"}
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.926950 4932 generic.go:334] "Generic (PLEG): container finished" podID="c4c8a6a6-4944-4c6f-be98-9dde833b89e5" containerID="eeb81a13449459a4c7d2237c075a2110a61a815c3e8cc4a439843e5121373f28" exitCode=0
Feb 18 19:52:37 crc kubenswrapper[4932]: I0218 19:52:37.927331 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-js74w" event={"ID":"c4c8a6a6-4944-4c6f-be98-9dde833b89e5","Type":"ContainerDied","Data":"eeb81a13449459a4c7d2237c075a2110a61a815c3e8cc4a439843e5121373f28"}
Feb 18 19:52:37 crc kubenswrapper[4932]: E0218 19:52:37.932348 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25"
Feb 18 19:52:38 crc kubenswrapper[4932]: W0218 19:52:38.090507 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02bb1c31_7377_432f_8434_72981200f1ac.slice/crio-958a01ac916323c31c5c47742739c2ae448f55f9d886b9c1893b8e6c38c03bbf WatchSource:0}: Error finding container 958a01ac916323c31c5c47742739c2ae448f55f9d886b9c1893b8e6c38c03bbf: Status 404 returned error can't find the container with id 958a01ac916323c31c5c47742739c2ae448f55f9d886b9c1893b8e6c38c03bbf
Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.102448 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-vtbzd"]
Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.240679 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9pgp9"]
Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.249236 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9pgp9"]
Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.262284 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-734d-account-create-update-stk6x"]
Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.938705 4932 generic.go:334] "Generic (PLEG): container finished" podID="bec590bc-e2ef-49e0-80be-27af6f69aa06" containerID="3a33312c61bc35aede7b854947f5cacef494c07faca9fd46ae2f217a195bc457" exitCode=0
Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.938961 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-734d-account-create-update-stk6x" event={"ID":"bec590bc-e2ef-49e0-80be-27af6f69aa06","Type":"ContainerDied","Data":"3a33312c61bc35aede7b854947f5cacef494c07faca9fd46ae2f217a195bc457"}
Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.938985 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-734d-account-create-update-stk6x" event={"ID":"bec590bc-e2ef-49e0-80be-27af6f69aa06","Type":"ContainerStarted","Data":"9bf0c8b6e14124204af0268e3540567cb7b036f9d1ead456934ebd8e07330a8e"}
Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.940805 4932 generic.go:334] "Generic (PLEG): container finished" podID="02bb1c31-7377-432f-8434-72981200f1ac" containerID="95e00440e590eb387c9cf8e2e2f9778a04bbe9e0e014879d57139cdcea3fd2d4" exitCode=0
Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.940858 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-vtbzd" event={"ID":"02bb1c31-7377-432f-8434-72981200f1ac","Type":"ContainerDied","Data":"95e00440e590eb387c9cf8e2e2f9778a04bbe9e0e014879d57139cdcea3fd2d4"}
Feb 18 19:52:38 crc kubenswrapper[4932]: I0218 19:52:38.940874 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-vtbzd" event={"ID":"02bb1c31-7377-432f-8434-72981200f1ac","Type":"ContainerStarted","Data":"958a01ac916323c31c5c47742739c2ae448f55f9d886b9c1893b8e6c38c03bbf"}
Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.190985 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd41ee5-d385-424f-996a-b3baf7f9eb8a" path="/var/lib/kubelet/pods/3bd41ee5-d385-424f-996a-b3baf7f9eb8a/volumes"
Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.332033 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5833-account-create-update-fxm2t"
Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.454289 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-operator-scripts\") pod \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") "
Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.454440 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mkrl\" (UniqueName: \"kubernetes.io/projected/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-kube-api-access-7mkrl\") pod \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\" (UID: \"7fa1fef8-5a2e-4518-8641-d4b594fc29a3\") "
Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.456827 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fa1fef8-5a2e-4518-8641-d4b594fc29a3" (UID: "7fa1fef8-5a2e-4518-8641-d4b594fc29a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.470494 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-kube-api-access-7mkrl" (OuterVolumeSpecName: "kube-api-access-7mkrl") pod "7fa1fef8-5a2e-4518-8641-d4b594fc29a3" (UID: "7fa1fef8-5a2e-4518-8641-d4b594fc29a3"). InnerVolumeSpecName "kube-api-access-7mkrl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.557010 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.557047 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mkrl\" (UniqueName: \"kubernetes.io/projected/7fa1fef8-5a2e-4518-8641-d4b594fc29a3-kube-api-access-7mkrl\") on node \"crc\" DevicePath \"\""
Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.660456 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-js74w"
Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.667403 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e952-account-create-update-jjrs6"
Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.672216 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zhvln"
Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.684449 4932 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-db-create-rw8qr" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.695137 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bd21-account-create-update-kcn9v" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.758889 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd8cd\" (UniqueName: \"kubernetes.io/projected/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-kube-api-access-dd8cd\") pod \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.759025 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-operator-scripts\") pod \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\" (UID: \"c4c8a6a6-4944-4c6f-be98-9dde833b89e5\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.759570 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4c8a6a6-4944-4c6f-be98-9dde833b89e5" (UID: "c4c8a6a6-4944-4c6f-be98-9dde833b89e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.763256 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-kube-api-access-dd8cd" (OuterVolumeSpecName: "kube-api-access-dd8cd") pod "c4c8a6a6-4944-4c6f-be98-9dde833b89e5" (UID: "c4c8a6a6-4944-4c6f-be98-9dde833b89e5"). InnerVolumeSpecName "kube-api-access-dd8cd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860433 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vggbn\" (UniqueName: \"kubernetes.io/projected/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-kube-api-access-vggbn\") pod \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860510 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/35590261-332c-47e0-89e9-4eef3fd36086-kube-api-access-c7g9z\") pod \"35590261-332c-47e0-89e9-4eef3fd36086\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860556 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35590261-332c-47e0-89e9-4eef3fd36086-operator-scripts\") pod \"35590261-332c-47e0-89e9-4eef3fd36086\" (UID: \"35590261-332c-47e0-89e9-4eef3fd36086\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860590 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk24q\" (UniqueName: \"kubernetes.io/projected/56349fdd-8b87-4910-b182-555b5913d5ee-kube-api-access-jk24q\") pod \"56349fdd-8b87-4910-b182-555b5913d5ee\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860628 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-operator-scripts\") pod \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\" (UID: \"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860652 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64352a4d-f3af-44e1-b1d7-cc5e125de560-operator-scripts\") pod \"64352a4d-f3af-44e1-b1d7-cc5e125de560\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860690 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56349fdd-8b87-4910-b182-555b5913d5ee-operator-scripts\") pod \"56349fdd-8b87-4910-b182-555b5913d5ee\" (UID: \"56349fdd-8b87-4910-b182-555b5913d5ee\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.860716 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4v6z\" (UniqueName: \"kubernetes.io/projected/64352a4d-f3af-44e1-b1d7-cc5e125de560-kube-api-access-q4v6z\") pod \"64352a4d-f3af-44e1-b1d7-cc5e125de560\" (UID: \"64352a4d-f3af-44e1-b1d7-cc5e125de560\") " Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861045 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35590261-332c-47e0-89e9-4eef3fd36086-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35590261-332c-47e0-89e9-4eef3fd36086" (UID: "35590261-332c-47e0-89e9-4eef3fd36086"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861534 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64352a4d-f3af-44e1-b1d7-cc5e125de560-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64352a4d-f3af-44e1-b1d7-cc5e125de560" (UID: "64352a4d-f3af-44e1-b1d7-cc5e125de560"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861659 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd8cd\" (UniqueName: \"kubernetes.io/projected/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-kube-api-access-dd8cd\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861685 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35590261-332c-47e0-89e9-4eef3fd36086-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861698 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64352a4d-f3af-44e1-b1d7-cc5e125de560-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861710 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4c8a6a6-4944-4c6f-be98-9dde833b89e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861895 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" (UID: "26bd1cb1-1dcb-460e-ba19-eb8bef1951b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.861933 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56349fdd-8b87-4910-b182-555b5913d5ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56349fdd-8b87-4910-b182-555b5913d5ee" (UID: "56349fdd-8b87-4910-b182-555b5913d5ee"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.863824 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56349fdd-8b87-4910-b182-555b5913d5ee-kube-api-access-jk24q" (OuterVolumeSpecName: "kube-api-access-jk24q") pod "56349fdd-8b87-4910-b182-555b5913d5ee" (UID: "56349fdd-8b87-4910-b182-555b5913d5ee"). InnerVolumeSpecName "kube-api-access-jk24q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.864297 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35590261-332c-47e0-89e9-4eef3fd36086-kube-api-access-c7g9z" (OuterVolumeSpecName: "kube-api-access-c7g9z") pod "35590261-332c-47e0-89e9-4eef3fd36086" (UID: "35590261-332c-47e0-89e9-4eef3fd36086"). InnerVolumeSpecName "kube-api-access-c7g9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.864959 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64352a4d-f3af-44e1-b1d7-cc5e125de560-kube-api-access-q4v6z" (OuterVolumeSpecName: "kube-api-access-q4v6z") pod "64352a4d-f3af-44e1-b1d7-cc5e125de560" (UID: "64352a4d-f3af-44e1-b1d7-cc5e125de560"). InnerVolumeSpecName "kube-api-access-q4v6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.865002 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-kube-api-access-vggbn" (OuterVolumeSpecName: "kube-api-access-vggbn") pod "26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" (UID: "26bd1cb1-1dcb-460e-ba19-eb8bef1951b5"). InnerVolumeSpecName "kube-api-access-vggbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.955050 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bd21-account-create-update-kcn9v" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.955046 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bd21-account-create-update-kcn9v" event={"ID":"56349fdd-8b87-4910-b182-555b5913d5ee","Type":"ContainerDied","Data":"f9bb2d66e25bd07650a62fc4ddaa3bf964c84c4c8996178f6cc499147ca25363"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.955486 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9bb2d66e25bd07650a62fc4ddaa3bf964c84c4c8996178f6cc499147ca25363" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.959536 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rw8qr" event={"ID":"26bd1cb1-1dcb-460e-ba19-eb8bef1951b5","Type":"ContainerDied","Data":"1d674917a20214787e1b8129748fdeaa37c9d2e1ee0acfb9283d23f2c9010653"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.959661 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d674917a20214787e1b8129748fdeaa37c9d2e1ee0acfb9283d23f2c9010653" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.959624 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rw8qr" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962011 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-5833-account-create-update-fxm2t" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962046 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5833-account-create-update-fxm2t" event={"ID":"7fa1fef8-5a2e-4518-8641-d4b594fc29a3","Type":"ContainerDied","Data":"ce3621b623070bf468ed09862ce254a24da7ba911cfd39df905bb1ca3d03fb1e"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962232 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce3621b623070bf468ed09862ce254a24da7ba911cfd39df905bb1ca3d03fb1e" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962668 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vggbn\" (UniqueName: \"kubernetes.io/projected/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-kube-api-access-vggbn\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962694 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/35590261-332c-47e0-89e9-4eef3fd36086-kube-api-access-c7g9z\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962707 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk24q\" (UniqueName: \"kubernetes.io/projected/56349fdd-8b87-4910-b182-555b5913d5ee-kube-api-access-jk24q\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962720 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962731 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56349fdd-8b87-4910-b182-555b5913d5ee-operator-scripts\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.962742 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4v6z\" (UniqueName: \"kubernetes.io/projected/64352a4d-f3af-44e1-b1d7-cc5e125de560-kube-api-access-q4v6z\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.964151 4932 generic.go:334] "Generic (PLEG): container finished" podID="4a133994-7b33-4db4-a923-5b90d51e47b9" containerID="f215ae9fbece324e9bd723a56e0e71d31c81d0090f9fea3975b162ab4d64e974" exitCode=0 Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.964189 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"4a133994-7b33-4db4-a923-5b90d51e47b9","Type":"ContainerDied","Data":"f215ae9fbece324e9bd723a56e0e71d31c81d0090f9fea3975b162ab4d64e974"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.966959 4932 generic.go:334] "Generic (PLEG): container finished" podID="cd547864-4d03-45ae-8bb1-10a360d36599" containerID="7410562445bbd85ecddd8f8fa1c64974cd82f5bccf5b814dba01368f2c897a68" exitCode=0 Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.967057 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd547864-4d03-45ae-8bb1-10a360d36599","Type":"ContainerDied","Data":"7410562445bbd85ecddd8f8fa1c64974cd82f5bccf5b814dba01368f2c897a68"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.971663 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e952-account-create-update-jjrs6" event={"ID":"35590261-332c-47e0-89e9-4eef3fd36086","Type":"ContainerDied","Data":"a57a4488cd2456a40afe2cc7b60575c26f106f5d96e0170780f8f565f70f3047"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.971729 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a57a4488cd2456a40afe2cc7b60575c26f106f5d96e0170780f8f565f70f3047" Feb 18 19:52:39 crc 
kubenswrapper[4932]: I0218 19:52:39.971823 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e952-account-create-update-jjrs6" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.978417 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-js74w" event={"ID":"c4c8a6a6-4944-4c6f-be98-9dde833b89e5","Type":"ContainerDied","Data":"a795285cf0cb2a54f1126955d37a9b6d8e276565b9e0d79ceb7a3f9ba32bad9b"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.978492 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a795285cf0cb2a54f1126955d37a9b6d8e276565b9e0d79ceb7a3f9ba32bad9b" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.978545 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-js74w" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.982226 4932 generic.go:334] "Generic (PLEG): container finished" podID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerID="9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d" exitCode=0 Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.982519 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c","Type":"ContainerDied","Data":"9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.990094 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zhvln" Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.990368 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zhvln" event={"ID":"64352a4d-f3af-44e1-b1d7-cc5e125de560","Type":"ContainerDied","Data":"e989b7ee4c814fc4ee53473b2356b223211c09b3f0c143affed0c93ec3ad0f14"} Feb 18 19:52:39 crc kubenswrapper[4932]: I0218 19:52:39.990414 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e989b7ee4c814fc4ee53473b2356b223211c09b3f0c143affed0c93ec3ad0f14" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.088376 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-99qbh" podUID="039d44bb-1ad0-4916-8ef2-3cece4829506" containerName="ovn-controller" probeResult="failure" output=< Feb 18 19:52:40 crc kubenswrapper[4932]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 19:52:40 crc kubenswrapper[4932]: > Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.147605 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.158849 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lvg9q" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.400922 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-734d-account-create-update-stk6x" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.420653 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-99qbh-config-487hr"] Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.420987 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64352a4d-f3af-44e1-b1d7-cc5e125de560" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421002 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="64352a4d-f3af-44e1-b1d7-cc5e125de560" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.421013 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec590bc-e2ef-49e0-80be-27af6f69aa06" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421020 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec590bc-e2ef-49e0-80be-27af6f69aa06" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.421029 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56349fdd-8b87-4910-b182-555b5913d5ee" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421035 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="56349fdd-8b87-4910-b182-555b5913d5ee" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.421046 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421051 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.421071 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa1fef8-5a2e-4518-8641-d4b594fc29a3" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421076 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa1fef8-5a2e-4518-8641-d4b594fc29a3" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.421090 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35590261-332c-47e0-89e9-4eef3fd36086" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421096 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="35590261-332c-47e0-89e9-4eef3fd36086" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: E0218 19:52:40.421106 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c8a6a6-4944-4c6f-be98-9dde833b89e5" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421112 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c8a6a6-4944-4c6f-be98-9dde833b89e5" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421482 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421496 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="35590261-332c-47e0-89e9-4eef3fd36086" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421507 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec590bc-e2ef-49e0-80be-27af6f69aa06" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421516 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7fa1fef8-5a2e-4518-8641-d4b594fc29a3" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421525 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="64352a4d-f3af-44e1-b1d7-cc5e125de560" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421534 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c8a6a6-4944-4c6f-be98-9dde833b89e5" containerName="mariadb-database-create" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.421543 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="56349fdd-8b87-4910-b182-555b5913d5ee" containerName="mariadb-account-create-update" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.422096 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.427467 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.432705 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-99qbh-config-487hr"] Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.483104 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec590bc-e2ef-49e0-80be-27af6f69aa06-operator-scripts\") pod \"bec590bc-e2ef-49e0-80be-27af6f69aa06\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.483323 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hk7r\" (UniqueName: \"kubernetes.io/projected/bec590bc-e2ef-49e0-80be-27af6f69aa06-kube-api-access-4hk7r\") pod \"bec590bc-e2ef-49e0-80be-27af6f69aa06\" (UID: \"bec590bc-e2ef-49e0-80be-27af6f69aa06\") " Feb 18 
19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.484150 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec590bc-e2ef-49e0-80be-27af6f69aa06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bec590bc-e2ef-49e0-80be-27af6f69aa06" (UID: "bec590bc-e2ef-49e0-80be-27af6f69aa06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.487886 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec590bc-e2ef-49e0-80be-27af6f69aa06-kube-api-access-4hk7r" (OuterVolumeSpecName: "kube-api-access-4hk7r") pod "bec590bc-e2ef-49e0-80be-27af6f69aa06" (UID: "bec590bc-e2ef-49e0-80be-27af6f69aa06"). InnerVolumeSpecName "kube-api-access-4hk7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.489856 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-vtbzd" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.584803 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzgsw\" (UniqueName: \"kubernetes.io/projected/02bb1c31-7377-432f-8434-72981200f1ac-kube-api-access-dzgsw\") pod \"02bb1c31-7377-432f-8434-72981200f1ac\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.584889 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bb1c31-7377-432f-8434-72981200f1ac-operator-scripts\") pod \"02bb1c31-7377-432f-8434-72981200f1ac\" (UID: \"02bb1c31-7377-432f-8434-72981200f1ac\") " Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585111 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585142 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-scripts\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585193 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-log-ovn\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " 
pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585246 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-additional-scripts\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585271 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run-ovn\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585312 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tpft\" (UniqueName: \"kubernetes.io/projected/69b0a2f7-a409-4d7e-b126-7b494c71503c-kube-api-access-4tpft\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585386 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02bb1c31-7377-432f-8434-72981200f1ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02bb1c31-7377-432f-8434-72981200f1ac" (UID: "02bb1c31-7377-432f-8434-72981200f1ac"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585586 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hk7r\" (UniqueName: \"kubernetes.io/projected/bec590bc-e2ef-49e0-80be-27af6f69aa06-kube-api-access-4hk7r\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585605 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bb1c31-7377-432f-8434-72981200f1ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.585617 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec590bc-e2ef-49e0-80be-27af6f69aa06-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.588985 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02bb1c31-7377-432f-8434-72981200f1ac-kube-api-access-dzgsw" (OuterVolumeSpecName: "kube-api-access-dzgsw") pod "02bb1c31-7377-432f-8434-72981200f1ac" (UID: "02bb1c31-7377-432f-8434-72981200f1ac"). InnerVolumeSpecName "kube-api-access-dzgsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.686484 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-additional-scripts\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.686724 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run-ovn\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.686855 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tpft\" (UniqueName: \"kubernetes.io/projected/69b0a2f7-a409-4d7e-b126-7b494c71503c-kube-api-access-4tpft\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.686977 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.687057 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-scripts\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " 
pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.687110 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.687110 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run-ovn\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.687366 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-log-ovn\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.687498 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-log-ovn\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.687668 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzgsw\" (UniqueName: \"kubernetes.io/projected/02bb1c31-7377-432f-8434-72981200f1ac-kube-api-access-dzgsw\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.688553 4932 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-additional-scripts\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.689479 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-scripts\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.708097 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tpft\" (UniqueName: \"kubernetes.io/projected/69b0a2f7-a409-4d7e-b126-7b494c71503c-kube-api-access-4tpft\") pod \"ovn-controller-99qbh-config-487hr\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:40 crc kubenswrapper[4932]: I0218 19:52:40.789216 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.001866 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"4a133994-7b33-4db4-a923-5b90d51e47b9","Type":"ContainerStarted","Data":"b79edae97107fb27d431eaa24e13cb7b0ff20b985becaeebac3ad72d18abaf73"} Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.002543 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.006453 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd547864-4d03-45ae-8bb1-10a360d36599","Type":"ContainerStarted","Data":"70c0ba22a4bf84fc3b05812bcef99a157180fd838ac2af05d6ca1de21cd9e980"} Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.006676 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.011257 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-vtbzd" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.011264 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-vtbzd" event={"ID":"02bb1c31-7377-432f-8434-72981200f1ac","Type":"ContainerDied","Data":"958a01ac916323c31c5c47742739c2ae448f55f9d886b9c1893b8e6c38c03bbf"} Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.011638 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="958a01ac916323c31c5c47742739c2ae448f55f9d886b9c1893b8e6c38c03bbf" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.015042 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c","Type":"ContainerStarted","Data":"7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9"} Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.015831 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.021207 4932 generic.go:334] "Generic (PLEG): container finished" podID="04953cd9-9de3-46b5-8b86-382b2d2291cd" containerID="3511d7edf13acf4b55c85650c80b80f04682a6a62d5515928313b0d0eefcc028" exitCode=0 Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.021272 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sq9sk" event={"ID":"04953cd9-9de3-46b5-8b86-382b2d2291cd","Type":"ContainerDied","Data":"3511d7edf13acf4b55c85650c80b80f04682a6a62d5515928313b0d0eefcc028"} Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.024594 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-734d-account-create-update-stk6x" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.024679 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-734d-account-create-update-stk6x" event={"ID":"bec590bc-e2ef-49e0-80be-27af6f69aa06","Type":"ContainerDied","Data":"9bf0c8b6e14124204af0268e3540567cb7b036f9d1ead456934ebd8e07330a8e"} Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.024702 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bf0c8b6e14124204af0268e3540567cb7b036f9d1ead456934ebd8e07330a8e" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.039410 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=53.604612272 podStartE2EDuration="1m1.039391579s" podCreationTimestamp="2026-02-18 19:51:40 +0000 UTC" firstStartedPulling="2026-02-18 19:51:56.87491046 +0000 UTC m=+1080.456865325" lastFinishedPulling="2026-02-18 19:52:04.309689787 +0000 UTC m=+1087.891644632" observedRunningTime="2026-02-18 19:52:41.034789585 +0000 UTC m=+1124.616744430" watchObservedRunningTime="2026-02-18 19:52:41.039391579 +0000 UTC m=+1124.621346424" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.088671 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=55.194396138 podStartE2EDuration="1m2.088653542s" podCreationTimestamp="2026-02-18 19:51:39 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.490743719 +0000 UTC m=+1081.072698554" lastFinishedPulling="2026-02-18 19:52:04.385001113 +0000 UTC m=+1087.966955958" observedRunningTime="2026-02-18 19:52:41.085645348 +0000 UTC m=+1124.667600193" watchObservedRunningTime="2026-02-18 19:52:41.088653542 +0000 UTC m=+1124.670608387" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.253884 4932 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.721346595 podStartE2EDuration="1m1.25386013s" podCreationTimestamp="2026-02-18 19:51:40 +0000 UTC" firstStartedPulling="2026-02-18 19:51:56.851492633 +0000 UTC m=+1080.433447478" lastFinishedPulling="2026-02-18 19:52:04.384006168 +0000 UTC m=+1087.965961013" observedRunningTime="2026-02-18 19:52:41.111653378 +0000 UTC m=+1124.693608223" watchObservedRunningTime="2026-02-18 19:52:41.25386013 +0000 UTC m=+1124.835814975" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.261110 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-99qbh-config-487hr"] Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.879547 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zwtz9"] Feb 18 19:52:41 crc kubenswrapper[4932]: E0218 19:52:41.881564 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02bb1c31-7377-432f-8434-72981200f1ac" containerName="mariadb-database-create" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.881666 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="02bb1c31-7377-432f-8434-72981200f1ac" containerName="mariadb-database-create" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.881965 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="02bb1c31-7377-432f-8434-72981200f1ac" containerName="mariadb-database-create" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.882701 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.887749 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 19:52:41 crc kubenswrapper[4932]: I0218 19:52:41.894905 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zwtz9"] Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.014252 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-operator-scripts\") pod \"root-account-create-update-zwtz9\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.014368 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcg8c\" (UniqueName: \"kubernetes.io/projected/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-kube-api-access-lcg8c\") pod \"root-account-create-update-zwtz9\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.033395 4932 generic.go:334] "Generic (PLEG): container finished" podID="69b0a2f7-a409-4d7e-b126-7b494c71503c" containerID="da426b82651806673889b52158bea2dd7d720c322fbc355879403c25885c3ec1" exitCode=0 Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.033928 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-99qbh-config-487hr" event={"ID":"69b0a2f7-a409-4d7e-b126-7b494c71503c","Type":"ContainerDied","Data":"da426b82651806673889b52158bea2dd7d720c322fbc355879403c25885c3ec1"} Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.033957 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-99qbh-config-487hr" event={"ID":"69b0a2f7-a409-4d7e-b126-7b494c71503c","Type":"ContainerStarted","Data":"411c6fe91d47e61d1c94fa3daa641a9d9133f35200c9e298952ec6b73ebd7b7a"} Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.115695 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-operator-scripts\") pod \"root-account-create-update-zwtz9\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.115894 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcg8c\" (UniqueName: \"kubernetes.io/projected/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-kube-api-access-lcg8c\") pod \"root-account-create-update-zwtz9\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.116798 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-operator-scripts\") pod \"root-account-create-update-zwtz9\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.157133 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcg8c\" (UniqueName: \"kubernetes.io/projected/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-kube-api-access-lcg8c\") pod \"root-account-create-update-zwtz9\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.205385 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.519861 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.622029 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-dispersionconf\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.622092 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf5rv\" (UniqueName: \"kubernetes.io/projected/04953cd9-9de3-46b5-8b86-382b2d2291cd-kube-api-access-zf5rv\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.622782 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-combined-ca-bundle\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.622958 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-swiftconf\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.622989 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-scripts\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: 
\"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.623017 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-ring-data-devices\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.623040 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04953cd9-9de3-46b5-8b86-382b2d2291cd-etc-swift\") pod \"04953cd9-9de3-46b5-8b86-382b2d2291cd\" (UID: \"04953cd9-9de3-46b5-8b86-382b2d2291cd\") " Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.623785 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.624366 4932 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.625600 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04953cd9-9de3-46b5-8b86-382b2d2291cd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.632548 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.646094 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.647496 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04953cd9-9de3-46b5-8b86-382b2d2291cd-kube-api-access-zf5rv" (OuterVolumeSpecName: "kube-api-access-zf5rv") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "kube-api-access-zf5rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.653322 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.653663 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-scripts" (OuterVolumeSpecName: "scripts") pod "04953cd9-9de3-46b5-8b86-382b2d2291cd" (UID: "04953cd9-9de3-46b5-8b86-382b2d2291cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.676613 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zwtz9"] Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.726234 4932 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.726266 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf5rv\" (UniqueName: \"kubernetes.io/projected/04953cd9-9de3-46b5-8b86-382b2d2291cd-kube-api-access-zf5rv\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.726276 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.726284 4932 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04953cd9-9de3-46b5-8b86-382b2d2291cd-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:42 crc kubenswrapper[4932]: I0218 19:52:42.726293 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04953cd9-9de3-46b5-8b86-382b2d2291cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:42 crc 
kubenswrapper[4932]: I0218 19:52:42.726301 4932 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04953cd9-9de3-46b5-8b86-382b2d2291cd-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.041901 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwtz9" event={"ID":"70c7b26f-6d2e-4fcd-8240-ca10bd148c99","Type":"ContainerStarted","Data":"f9f90dc57da26de1688aea88788204ac610c6fa3970ee4965c6add216640da6a"} Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.042954 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwtz9" event={"ID":"70c7b26f-6d2e-4fcd-8240-ca10bd148c99","Type":"ContainerStarted","Data":"04e61d84b5f812d0e74e2f99dfbb5e3d031730c69da014b9da8a1364dff80ae4"} Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.043821 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-sq9sk" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.044841 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-sq9sk" event={"ID":"04953cd9-9de3-46b5-8b86-382b2d2291cd","Type":"ContainerDied","Data":"ed4ba0f7587a73b183dcf28620debb7555eadcf63d796c9e3aed7de82b80093c"} Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.044945 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed4ba0f7587a73b183dcf28620debb7555eadcf63d796c9e3aed7de82b80093c" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.063064 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-zwtz9" podStartSLOduration=2.06304575 podStartE2EDuration="2.06304575s" podCreationTimestamp="2026-02-18 19:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:52:43.059111353 +0000 UTC m=+1126.641066218" watchObservedRunningTime="2026-02-18 19:52:43.06304575 +0000 UTC m=+1126.645000595" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.460183 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540140 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-scripts\") pod \"69b0a2f7-a409-4d7e-b126-7b494c71503c\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540213 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tpft\" (UniqueName: \"kubernetes.io/projected/69b0a2f7-a409-4d7e-b126-7b494c71503c-kube-api-access-4tpft\") pod \"69b0a2f7-a409-4d7e-b126-7b494c71503c\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540268 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run\") pod \"69b0a2f7-a409-4d7e-b126-7b494c71503c\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540297 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-additional-scripts\") pod \"69b0a2f7-a409-4d7e-b126-7b494c71503c\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540348 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run-ovn\") pod \"69b0a2f7-a409-4d7e-b126-7b494c71503c\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540480 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-log-ovn\") pod \"69b0a2f7-a409-4d7e-b126-7b494c71503c\" (UID: \"69b0a2f7-a409-4d7e-b126-7b494c71503c\") " Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540819 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "69b0a2f7-a409-4d7e-b126-7b494c71503c" (UID: "69b0a2f7-a409-4d7e-b126-7b494c71503c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.540855 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run" (OuterVolumeSpecName: "var-run") pod "69b0a2f7-a409-4d7e-b126-7b494c71503c" (UID: "69b0a2f7-a409-4d7e-b126-7b494c71503c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.541610 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "69b0a2f7-a409-4d7e-b126-7b494c71503c" (UID: "69b0a2f7-a409-4d7e-b126-7b494c71503c"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.541651 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "69b0a2f7-a409-4d7e-b126-7b494c71503c" (UID: "69b0a2f7-a409-4d7e-b126-7b494c71503c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.542048 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-scripts" (OuterVolumeSpecName: "scripts") pod "69b0a2f7-a409-4d7e-b126-7b494c71503c" (UID: "69b0a2f7-a409-4d7e-b126-7b494c71503c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.545319 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b0a2f7-a409-4d7e-b126-7b494c71503c-kube-api-access-4tpft" (OuterVolumeSpecName: "kube-api-access-4tpft") pod "69b0a2f7-a409-4d7e-b126-7b494c71503c" (UID: "69b0a2f7-a409-4d7e-b126-7b494c71503c"). InnerVolumeSpecName "kube-api-access-4tpft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.642834 4932 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.642870 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.642881 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tpft\" (UniqueName: \"kubernetes.io/projected/69b0a2f7-a409-4d7e-b126-7b494c71503c-kube-api-access-4tpft\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.642893 4932 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.642903 4932 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/69b0a2f7-a409-4d7e-b126-7b494c71503c-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:43 crc kubenswrapper[4932]: I0218 19:52:43.642911 4932 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69b0a2f7-a409-4d7e-b126-7b494c71503c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:44 crc kubenswrapper[4932]: I0218 19:52:44.052422 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-99qbh-config-487hr" event={"ID":"69b0a2f7-a409-4d7e-b126-7b494c71503c","Type":"ContainerDied","Data":"411c6fe91d47e61d1c94fa3daa641a9d9133f35200c9e298952ec6b73ebd7b7a"} Feb 18 19:52:44 crc 
kubenswrapper[4932]: I0218 19:52:44.052466 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="411c6fe91d47e61d1c94fa3daa641a9d9133f35200c9e298952ec6b73ebd7b7a" Feb 18 19:52:44 crc kubenswrapper[4932]: I0218 19:52:44.052533 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-99qbh-config-487hr" Feb 18 19:52:44 crc kubenswrapper[4932]: I0218 19:52:44.063551 4932 generic.go:334] "Generic (PLEG): container finished" podID="70c7b26f-6d2e-4fcd-8240-ca10bd148c99" containerID="f9f90dc57da26de1688aea88788204ac610c6fa3970ee4965c6add216640da6a" exitCode=0 Feb 18 19:52:44 crc kubenswrapper[4932]: I0218 19:52:44.063600 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwtz9" event={"ID":"70c7b26f-6d2e-4fcd-8240-ca10bd148c99","Type":"ContainerDied","Data":"f9f90dc57da26de1688aea88788204ac610c6fa3970ee4965c6add216640da6a"} Feb 18 19:52:44 crc kubenswrapper[4932]: I0218 19:52:44.575266 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-99qbh-config-487hr"] Feb 18 19:52:44 crc kubenswrapper[4932]: I0218 19:52:44.578903 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-99qbh-config-487hr"] Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.075015 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-rl7xx"] Feb 18 19:52:45 crc kubenswrapper[4932]: E0218 19:52:45.075373 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b0a2f7-a409-4d7e-b126-7b494c71503c" containerName="ovn-config" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.075387 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b0a2f7-a409-4d7e-b126-7b494c71503c" containerName="ovn-config" Feb 18 19:52:45 crc kubenswrapper[4932]: E0218 19:52:45.075401 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="04953cd9-9de3-46b5-8b86-382b2d2291cd" containerName="swift-ring-rebalance" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.075406 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="04953cd9-9de3-46b5-8b86-382b2d2291cd" containerName="swift-ring-rebalance" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.075569 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b0a2f7-a409-4d7e-b126-7b494c71503c" containerName="ovn-config" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.075583 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="04953cd9-9de3-46b5-8b86-382b2d2291cd" containerName="swift-ring-rebalance" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.076127 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.078485 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.078740 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mx5f7" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.097939 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-99qbh" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.098740 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rl7xx"] Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.167681 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-combined-ca-bundle\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.167835 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-db-sync-config-data\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.168066 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlv4k\" (UniqueName: \"kubernetes.io/projected/1bbf2873-6ca9-4569-b5b6-3003511c02ba-kube-api-access-qlv4k\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.168147 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-config-data\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.189824 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b0a2f7-a409-4d7e-b126-7b494c71503c" path="/var/lib/kubelet/pods/69b0a2f7-a409-4d7e-b126-7b494c71503c/volumes" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.270049 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlv4k\" (UniqueName: \"kubernetes.io/projected/1bbf2873-6ca9-4569-b5b6-3003511c02ba-kube-api-access-qlv4k\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.270109 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-config-data\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.270183 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-combined-ca-bundle\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.270226 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-db-sync-config-data\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.275876 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-db-sync-config-data\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.277787 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-config-data\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.279029 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-combined-ca-bundle\") pod \"glance-db-sync-rl7xx\" (UID: 
\"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.293758 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlv4k\" (UniqueName: \"kubernetes.io/projected/1bbf2873-6ca9-4569-b5b6-3003511c02ba-kube-api-access-qlv4k\") pod \"glance-db-sync-rl7xx\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.393298 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.395900 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rl7xx" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.473237 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcg8c\" (UniqueName: \"kubernetes.io/projected/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-kube-api-access-lcg8c\") pod \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.474685 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70c7b26f-6d2e-4fcd-8240-ca10bd148c99" (UID: "70c7b26f-6d2e-4fcd-8240-ca10bd148c99"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.474752 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-operator-scripts\") pod \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\" (UID: \"70c7b26f-6d2e-4fcd-8240-ca10bd148c99\") " Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.475520 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.478310 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-kube-api-access-lcg8c" (OuterVolumeSpecName: "kube-api-access-lcg8c") pod "70c7b26f-6d2e-4fcd-8240-ca10bd148c99" (UID: "70c7b26f-6d2e-4fcd-8240-ca10bd148c99"). InnerVolumeSpecName "kube-api-access-lcg8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.576878 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcg8c\" (UniqueName: \"kubernetes.io/projected/70c7b26f-6d2e-4fcd-8240-ca10bd148c99-kube-api-access-lcg8c\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:45 crc kubenswrapper[4932]: I0218 19:52:45.970281 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rl7xx"] Feb 18 19:52:46 crc kubenswrapper[4932]: I0218 19:52:46.081672 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zwtz9" Feb 18 19:52:46 crc kubenswrapper[4932]: I0218 19:52:46.081679 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwtz9" event={"ID":"70c7b26f-6d2e-4fcd-8240-ca10bd148c99","Type":"ContainerDied","Data":"04e61d84b5f812d0e74e2f99dfbb5e3d031730c69da014b9da8a1364dff80ae4"} Feb 18 19:52:46 crc kubenswrapper[4932]: I0218 19:52:46.081739 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04e61d84b5f812d0e74e2f99dfbb5e3d031730c69da014b9da8a1364dff80ae4" Feb 18 19:52:46 crc kubenswrapper[4932]: I0218 19:52:46.082684 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rl7xx" event={"ID":"1bbf2873-6ca9-4569-b5b6-3003511c02ba","Type":"ContainerStarted","Data":"f068e85210ddcd828af2d489d54882cc64dba8b583684c4c6f7597bf8f804826"} Feb 18 19:52:48 crc kubenswrapper[4932]: I0218 19:52:48.246864 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zwtz9"] Feb 18 19:52:48 crc kubenswrapper[4932]: I0218 19:52:48.260912 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zwtz9"] Feb 18 19:52:49 crc kubenswrapper[4932]: I0218 19:52:49.189234 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c7b26f-6d2e-4fcd-8240-ca10bd148c99" path="/var/lib/kubelet/pods/70c7b26f-6d2e-4fcd-8240-ca10bd148c99/volumes" Feb 18 19:52:50 crc kubenswrapper[4932]: I0218 19:52:50.459350 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:50 crc kubenswrapper[4932]: I0218 19:52:50.466106 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5-etc-swift\") pod \"swift-storage-0\" (UID: \"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5\") " pod="openstack/swift-storage-0" Feb 18 19:52:50 crc kubenswrapper[4932]: I0218 19:52:50.595918 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 19:52:51 crc kubenswrapper[4932]: I0218 19:52:51.115867 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 18 19:52:51 crc kubenswrapper[4932]: I0218 19:52:51.425479 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 18 19:52:51 crc kubenswrapper[4932]: I0218 19:52:51.761000 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="4a133994-7b33-4db4-a923-5b90d51e47b9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.254572 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xbdgt"] Feb 18 19:52:53 crc kubenswrapper[4932]: E0218 19:52:53.255774 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c7b26f-6d2e-4fcd-8240-ca10bd148c99" containerName="mariadb-account-create-update" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.255793 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c7b26f-6d2e-4fcd-8240-ca10bd148c99" containerName="mariadb-account-create-update" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.256018 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="70c7b26f-6d2e-4fcd-8240-ca10bd148c99" containerName="mariadb-account-create-update" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.256658 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.263958 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.270344 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xbdgt"] Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.411632 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h44j5\" (UniqueName: \"kubernetes.io/projected/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-kube-api-access-h44j5\") pod \"root-account-create-update-xbdgt\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.411742 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-operator-scripts\") pod \"root-account-create-update-xbdgt\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.513137 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-operator-scripts\") pod \"root-account-create-update-xbdgt\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 
19:52:53.513283 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h44j5\" (UniqueName: \"kubernetes.io/projected/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-kube-api-access-h44j5\") pod \"root-account-create-update-xbdgt\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.514225 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-operator-scripts\") pod \"root-account-create-update-xbdgt\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.535164 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h44j5\" (UniqueName: \"kubernetes.io/projected/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-kube-api-access-h44j5\") pod \"root-account-create-update-xbdgt\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:53 crc kubenswrapper[4932]: I0218 19:52:53.573653 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xbdgt" Feb 18 19:52:57 crc kubenswrapper[4932]: I0218 19:52:57.606006 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:52:57 crc kubenswrapper[4932]: I0218 19:52:57.606549 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:52:57 crc kubenswrapper[4932]: I0218 19:52:57.606601 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:52:57 crc kubenswrapper[4932]: I0218 19:52:57.608043 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"435f6d4431c63fe1b1d0a709b03d86681659a5d37fb618d6ab36ba1010fce349"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:52:57 crc kubenswrapper[4932]: I0218 19:52:57.608112 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://435f6d4431c63fe1b1d0a709b03d86681659a5d37fb618d6ab36ba1010fce349" gracePeriod=600 Feb 18 19:52:58 crc kubenswrapper[4932]: I0218 19:52:58.195235 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="435f6d4431c63fe1b1d0a709b03d86681659a5d37fb618d6ab36ba1010fce349" exitCode=0 Feb 18 19:52:58 crc kubenswrapper[4932]: I0218 19:52:58.195294 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"435f6d4431c63fe1b1d0a709b03d86681659a5d37fb618d6ab36ba1010fce349"} Feb 18 19:52:58 crc kubenswrapper[4932]: I0218 19:52:58.195334 4932 scope.go:117] "RemoveContainer" containerID="0796b82991176676a1533452d61ed93202733b7f85192cab295504d343f7c992" Feb 18 19:52:58 crc kubenswrapper[4932]: I0218 19:52:58.815348 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xbdgt"] Feb 18 19:52:58 crc kubenswrapper[4932]: W0218 19:52:58.836371 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb4a050_ebc6_4319_b27f_9c9cce058ec1.slice/crio-f292fccc752de47337eef5d251f520660567033b668f7756cde4342332ac7066 WatchSource:0}: Error finding container f292fccc752de47337eef5d251f520660567033b668f7756cde4342332ac7066: Status 404 returned error can't find the container with id f292fccc752de47337eef5d251f520660567033b668f7756cde4342332ac7066 Feb 18 19:52:58 crc kubenswrapper[4932]: I0218 19:52:58.841015 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.136909 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.204827 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xbdgt" 
event={"ID":"3eb4a050-ebc6-4319-b27f-9c9cce058ec1","Type":"ContainerStarted","Data":"38fb496c61ec368b9f0d3847ea90156e96e96daa825692bcb6b0867b238ef4ee"} Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.204867 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xbdgt" event={"ID":"3eb4a050-ebc6-4319-b27f-9c9cce058ec1","Type":"ContainerStarted","Data":"f292fccc752de47337eef5d251f520660567033b668f7756cde4342332ac7066"} Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.206571 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"b0c4ca69d7b367f202571359f1930a62d621ebd002b1f653a6e0ef9c09429ba2"} Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.210024 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerStarted","Data":"df6328f727f0438e57305c9925166837d62f5032c8dd58e3ace63bdd0cdad46f"} Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.213346 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"691ac26b2e0eb4976dab73dc438ad2163dc0ad731157e8dbe0e2c19541cba856"} Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.226205 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xbdgt" podStartSLOduration=6.226168404 podStartE2EDuration="6.226168404s" podCreationTimestamp="2026-02-18 19:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:52:59.219057668 +0000 UTC m=+1142.801012513" watchObservedRunningTime="2026-02-18 19:52:59.226168404 +0000 UTC m=+1142.808123249" 
Feb 18 19:52:59 crc kubenswrapper[4932]: I0218 19:52:59.251779 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=11.123133462 podStartE2EDuration="1m12.251761394s" podCreationTimestamp="2026-02-18 19:51:47 +0000 UTC" firstStartedPulling="2026-02-18 19:51:57.4408188 +0000 UTC m=+1081.022773645" lastFinishedPulling="2026-02-18 19:52:58.569446712 +0000 UTC m=+1142.151401577" observedRunningTime="2026-02-18 19:52:59.241806129 +0000 UTC m=+1142.823760984" watchObservedRunningTime="2026-02-18 19:52:59.251761394 +0000 UTC m=+1142.833716239" Feb 18 19:53:00 crc kubenswrapper[4932]: I0218 19:53:00.224569 4932 generic.go:334] "Generic (PLEG): container finished" podID="3eb4a050-ebc6-4319-b27f-9c9cce058ec1" containerID="38fb496c61ec368b9f0d3847ea90156e96e96daa825692bcb6b0867b238ef4ee" exitCode=0 Feb 18 19:53:00 crc kubenswrapper[4932]: I0218 19:53:00.224752 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xbdgt" event={"ID":"3eb4a050-ebc6-4319-b27f-9c9cce058ec1","Type":"ContainerDied","Data":"38fb496c61ec368b9f0d3847ea90156e96e96daa825692bcb6b0867b238ef4ee"} Feb 18 19:53:00 crc kubenswrapper[4932]: I0218 19:53:00.227867 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rl7xx" event={"ID":"1bbf2873-6ca9-4569-b5b6-3003511c02ba","Type":"ContainerStarted","Data":"104353923ef97f2e6933dbfcfbfc2a9125473f1373667e2eb5163afb4316da88"} Feb 18 19:53:00 crc kubenswrapper[4932]: I0218 19:53:00.229599 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"e0f5557ac4379d0c6d366e9f385551b3db089d0f37a9e3b044ddf7c3b9791d40"} Feb 18 19:53:00 crc kubenswrapper[4932]: I0218 19:53:00.229652 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"0d4d6491dfafa508897f8c85011f34a4de9a81466406275234a2c4a77196ad9e"} Feb 18 19:53:00 crc kubenswrapper[4932]: I0218 19:53:00.294625 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-rl7xx" podStartSLOduration=2.677637061 podStartE2EDuration="15.294594993s" podCreationTimestamp="2026-02-18 19:52:45 +0000 UTC" firstStartedPulling="2026-02-18 19:52:45.980537141 +0000 UTC m=+1129.562491986" lastFinishedPulling="2026-02-18 19:52:58.597495073 +0000 UTC m=+1142.179449918" observedRunningTime="2026-02-18 19:53:00.267300681 +0000 UTC m=+1143.849255536" watchObservedRunningTime="2026-02-18 19:53:00.294594993 +0000 UTC m=+1143.876549878" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.116423 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.241412 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"b6f0778a169dc19434921249b3093342769dcc715b273057dabc336fa9eceb1f"} Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.241456 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"c3a6ca6a9ed711f3d48080461dc1e16d45096f60bbdde6a4c51ccad3d79b1ae0"} Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.431038 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.670748 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-hvt6h"] Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.672023 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hvt6h" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.695855 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hvt6h"] Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.750856 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xbdgt" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.761706 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.763140 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwdm7\" (UniqueName: \"kubernetes.io/projected/56734660-55cc-463c-89f2-131bc9109dab-kube-api-access-dwdm7\") pod \"barbican-db-create-hvt6h\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " pod="openstack/barbican-db-create-hvt6h" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.763202 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56734660-55cc-463c-89f2-131bc9109dab-operator-scripts\") pod \"barbican-db-create-hvt6h\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " pod="openstack/barbican-db-create-hvt6h" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.773739 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a65d-account-create-update-chx2v"] Feb 18 19:53:01 crc kubenswrapper[4932]: E0218 19:53:01.774072 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb4a050-ebc6-4319-b27f-9c9cce058ec1" containerName="mariadb-account-create-update" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.774088 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb4a050-ebc6-4319-b27f-9c9cce058ec1" 
containerName="mariadb-account-create-update" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.774266 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb4a050-ebc6-4319-b27f-9c9cce058ec1" containerName="mariadb-account-create-update" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.774772 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a65d-account-create-update-chx2v" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.776415 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.803333 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a65d-account-create-update-chx2v"] Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.863806 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h44j5\" (UniqueName: \"kubernetes.io/projected/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-kube-api-access-h44j5\") pod \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.863907 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-operator-scripts\") pod \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\" (UID: \"3eb4a050-ebc6-4319-b27f-9c9cce058ec1\") " Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.864186 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-operator-scripts\") pod \"barbican-a65d-account-create-update-chx2v\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " pod="openstack/barbican-a65d-account-create-update-chx2v" Feb 18 19:53:01 crc 
kubenswrapper[4932]: I0218 19:53:01.864236 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64cqb\" (UniqueName: \"kubernetes.io/projected/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-kube-api-access-64cqb\") pod \"barbican-a65d-account-create-update-chx2v\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " pod="openstack/barbican-a65d-account-create-update-chx2v" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.864265 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwdm7\" (UniqueName: \"kubernetes.io/projected/56734660-55cc-463c-89f2-131bc9109dab-kube-api-access-dwdm7\") pod \"barbican-db-create-hvt6h\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " pod="openstack/barbican-db-create-hvt6h" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.864314 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56734660-55cc-463c-89f2-131bc9109dab-operator-scripts\") pod \"barbican-db-create-hvt6h\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " pod="openstack/barbican-db-create-hvt6h" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.865298 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3eb4a050-ebc6-4319-b27f-9c9cce058ec1" (UID: "3eb4a050-ebc6-4319-b27f-9c9cce058ec1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.865735 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56734660-55cc-463c-89f2-131bc9109dab-operator-scripts\") pod \"barbican-db-create-hvt6h\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " pod="openstack/barbican-db-create-hvt6h" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.884613 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-kube-api-access-h44j5" (OuterVolumeSpecName: "kube-api-access-h44j5") pod "3eb4a050-ebc6-4319-b27f-9c9cce058ec1" (UID: "3eb4a050-ebc6-4319-b27f-9c9cce058ec1"). InnerVolumeSpecName "kube-api-access-h44j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.887967 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwdm7\" (UniqueName: \"kubernetes.io/projected/56734660-55cc-463c-89f2-131bc9109dab-kube-api-access-dwdm7\") pod \"barbican-db-create-hvt6h\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " pod="openstack/barbican-db-create-hvt6h" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.967923 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-hn6qq"] Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.969051 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.969616 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-operator-scripts\") pod \"barbican-a65d-account-create-update-chx2v\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " pod="openstack/barbican-a65d-account-create-update-chx2v" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.969712 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64cqb\" (UniqueName: \"kubernetes.io/projected/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-kube-api-access-64cqb\") pod \"barbican-a65d-account-create-update-chx2v\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " pod="openstack/barbican-a65d-account-create-update-chx2v" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.969883 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.969930 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h44j5\" (UniqueName: \"kubernetes.io/projected/3eb4a050-ebc6-4319-b27f-9c9cce058ec1-kube-api-access-h44j5\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.971853 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-operator-scripts\") pod \"barbican-a65d-account-create-update-chx2v\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " pod="openstack/barbican-a65d-account-create-update-chx2v" Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.989044 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-db-create-hn6qq"] Feb 18 19:53:01 crc kubenswrapper[4932]: I0218 19:53:01.996885 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64cqb\" (UniqueName: \"kubernetes.io/projected/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-kube-api-access-64cqb\") pod \"barbican-a65d-account-create-update-chx2v\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " pod="openstack/barbican-a65d-account-create-update-chx2v" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.013559 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5bd9-account-create-update-7tv8h"] Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.025307 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5bd9-account-create-update-7tv8h" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.030872 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.064642 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5bd9-account-create-update-7tv8h"] Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.073266 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzrrj\" (UniqueName: \"kubernetes.io/projected/7680bf6b-efd6-452a-8900-09cf55b203ff-kube-api-access-mzrrj\") pod \"cinder-5bd9-account-create-update-7tv8h\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " pod="openstack/cinder-5bd9-account-create-update-7tv8h" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.073318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7680bf6b-efd6-452a-8900-09cf55b203ff-operator-scripts\") pod \"cinder-5bd9-account-create-update-7tv8h\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " 
pod="openstack/cinder-5bd9-account-create-update-7tv8h" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.073359 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7khwn\" (UniqueName: \"kubernetes.io/projected/f7988cea-6aa8-4552-8965-04b417c91831-kube-api-access-7khwn\") pod \"cinder-db-create-hn6qq\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.073426 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7988cea-6aa8-4552-8965-04b417c91831-operator-scripts\") pod \"cinder-db-create-hn6qq\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.073851 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hvt6h" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.101596 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-h526s"] Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.102679 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a65d-account-create-update-chx2v" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.102769 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.113573 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-h526s"] Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.113718 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sk7x7" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.113818 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.114004 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.114125 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.175708 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5sds\" (UniqueName: \"kubernetes.io/projected/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-kube-api-access-r5sds\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.176104 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-config-data\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.176156 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzrrj\" (UniqueName: \"kubernetes.io/projected/7680bf6b-efd6-452a-8900-09cf55b203ff-kube-api-access-mzrrj\") pod \"cinder-5bd9-account-create-update-7tv8h\" 
(UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " pod="openstack/cinder-5bd9-account-create-update-7tv8h" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.176197 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7680bf6b-efd6-452a-8900-09cf55b203ff-operator-scripts\") pod \"cinder-5bd9-account-create-update-7tv8h\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " pod="openstack/cinder-5bd9-account-create-update-7tv8h" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.176235 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7khwn\" (UniqueName: \"kubernetes.io/projected/f7988cea-6aa8-4552-8965-04b417c91831-kube-api-access-7khwn\") pod \"cinder-db-create-hn6qq\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.176266 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-combined-ca-bundle\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.176317 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7988cea-6aa8-4552-8965-04b417c91831-operator-scripts\") pod \"cinder-db-create-hn6qq\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.177006 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7988cea-6aa8-4552-8965-04b417c91831-operator-scripts\") pod \"cinder-db-create-hn6qq\" (UID: 
\"f7988cea-6aa8-4552-8965-04b417c91831\") " pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.177749 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7680bf6b-efd6-452a-8900-09cf55b203ff-operator-scripts\") pod \"cinder-5bd9-account-create-update-7tv8h\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " pod="openstack/cinder-5bd9-account-create-update-7tv8h" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.194721 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7khwn\" (UniqueName: \"kubernetes.io/projected/f7988cea-6aa8-4552-8965-04b417c91831-kube-api-access-7khwn\") pod \"cinder-db-create-hn6qq\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.194718 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzrrj\" (UniqueName: \"kubernetes.io/projected/7680bf6b-efd6-452a-8900-09cf55b203ff-kube-api-access-mzrrj\") pod \"cinder-5bd9-account-create-update-7tv8h\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " pod="openstack/cinder-5bd9-account-create-update-7tv8h" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.272947 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"bbee0ef35cd50681f80a2e01b2c6ec4191424d86c97c1f6ce0c7ff60a9945be4"} Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.272985 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"709a384ce96f65611223bccb435327dbaa1ac6245c2d8052bbba507d5da472de"} Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.272995 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"d93a37c9f3e4b5851f721ec371db06711dc32dc7d4eb50f229ad571de6bd5ab7"} Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.276540 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xbdgt" event={"ID":"3eb4a050-ebc6-4319-b27f-9c9cce058ec1","Type":"ContainerDied","Data":"f292fccc752de47337eef5d251f520660567033b668f7756cde4342332ac7066"} Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.276567 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f292fccc752de47337eef5d251f520660567033b668f7756cde4342332ac7066" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.276629 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xbdgt" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.280114 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-combined-ca-bundle\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.280213 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5sds\" (UniqueName: \"kubernetes.io/projected/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-kube-api-access-r5sds\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.280242 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-config-data\") pod \"keystone-db-sync-h526s\" (UID: 
\"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.293764 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.296128 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-combined-ca-bundle\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.296563 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-config-data\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.323782 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5sds\" (UniqueName: \"kubernetes.io/projected/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-kube-api-access-r5sds\") pod \"keystone-db-sync-h526s\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.385281 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5bd9-account-create-update-7tv8h" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.444101 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.538814 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a65d-account-create-update-chx2v"] Feb 18 19:53:02 crc kubenswrapper[4932]: W0218 19:53:02.542912 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac9c39c2_bf9e_4f11_b37f_17089fce08e7.slice/crio-51583244c9cd178243965b225a45c783c841e20d65edf2a86b3f9db92e439266 WatchSource:0}: Error finding container 51583244c9cd178243965b225a45c783c841e20d65edf2a86b3f9db92e439266: Status 404 returned error can't find the container with id 51583244c9cd178243965b225a45c783c841e20d65edf2a86b3f9db92e439266 Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.676958 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hvt6h"] Feb 18 19:53:02 crc kubenswrapper[4932]: I0218 19:53:02.930807 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hn6qq"] Feb 18 19:53:02 crc kubenswrapper[4932]: W0218 19:53:02.939255 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7988cea_6aa8_4552_8965_04b417c91831.slice/crio-d81c7286fbf08cc79bae9df6210ed5ad98c0dfda1ac48aec2223ff4bd51a0816 WatchSource:0}: Error finding container d81c7286fbf08cc79bae9df6210ed5ad98c0dfda1ac48aec2223ff4bd51a0816: Status 404 returned error can't find the container with id d81c7286fbf08cc79bae9df6210ed5ad98c0dfda1ac48aec2223ff4bd51a0816 Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.097269 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5bd9-account-create-update-7tv8h"] Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.156972 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-h526s"] Feb 18 19:53:03 crc 
kubenswrapper[4932]: I0218 19:53:03.303023 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h526s" event={"ID":"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a","Type":"ContainerStarted","Data":"43fb9c3d5607cbfcdb2e71bfe2fe586c4b79577442d0345f45f7e3f3cb5eb6e7"} Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.318822 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hn6qq" event={"ID":"f7988cea-6aa8-4552-8965-04b417c91831","Type":"ContainerStarted","Data":"03cc21b056f77810add58b5621bb79299b2f95efe33228e5665e27461f3e50f3"} Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.318864 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hn6qq" event={"ID":"f7988cea-6aa8-4552-8965-04b417c91831","Type":"ContainerStarted","Data":"d81c7286fbf08cc79bae9df6210ed5ad98c0dfda1ac48aec2223ff4bd51a0816"} Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.339129 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hvt6h" event={"ID":"56734660-55cc-463c-89f2-131bc9109dab","Type":"ContainerStarted","Data":"979fb0febd6062fe5161812c56f74561bb0c81dc6ed2e8e26cb348d3275186d6"} Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.339191 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hvt6h" event={"ID":"56734660-55cc-463c-89f2-131bc9109dab","Type":"ContainerStarted","Data":"46b0c6beb77ba63cc4fb7bf59b078265e0be2b3e41c391faa1cfacce870602a0"} Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.347306 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5bd9-account-create-update-7tv8h" event={"ID":"7680bf6b-efd6-452a-8900-09cf55b203ff","Type":"ContainerStarted","Data":"c049a4b17cd32b5f24bb7e9e3ef0f21dbe83e94bb0e6b01b6cebe4cc220b64e4"} Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.349251 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-db-create-hn6qq" podStartSLOduration=2.3492288009999998 podStartE2EDuration="2.349228801s" podCreationTimestamp="2026-02-18 19:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:03.333490543 +0000 UTC m=+1146.915445378" watchObservedRunningTime="2026-02-18 19:53:03.349228801 +0000 UTC m=+1146.931183646" Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.352950 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"99a9b400b3f548a25cab483817236360ae6c0770b440b7691ce80970079bc52e"} Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.360364 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a65d-account-create-update-chx2v" event={"ID":"ac9c39c2-bf9e-4f11-b37f-17089fce08e7","Type":"ContainerStarted","Data":"48cdc7bd0a5fa5affdc3d044cfe0ccc940cdde09dc40fd9f4253e5cd4c996f16"} Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.360406 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a65d-account-create-update-chx2v" event={"ID":"ac9c39c2-bf9e-4f11-b37f-17089fce08e7","Type":"ContainerStarted","Data":"51583244c9cd178243965b225a45c783c841e20d65edf2a86b3f9db92e439266"} Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.371588 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-hvt6h" podStartSLOduration=2.371569621 podStartE2EDuration="2.371569621s" podCreationTimestamp="2026-02-18 19:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:03.364653521 +0000 UTC m=+1146.946608366" watchObservedRunningTime="2026-02-18 19:53:03.371569621 +0000 UTC m=+1146.953524466" Feb 18 19:53:03 crc kubenswrapper[4932]: 
I0218 19:53:03.398229 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-a65d-account-create-update-chx2v" podStartSLOduration=2.398210107 podStartE2EDuration="2.398210107s" podCreationTimestamp="2026-02-18 19:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:03.38168237 +0000 UTC m=+1146.963637205" watchObservedRunningTime="2026-02-18 19:53:03.398210107 +0000 UTC m=+1146.980164952" Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.495968 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.496020 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.498458 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:03 crc kubenswrapper[4932]: I0218 19:53:03.525755 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-5bd9-account-create-update-7tv8h" podStartSLOduration=2.525735107 podStartE2EDuration="2.525735107s" podCreationTimestamp="2026-02-18 19:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:03.415563854 +0000 UTC m=+1146.997518699" watchObservedRunningTime="2026-02-18 19:53:03.525735107 +0000 UTC m=+1147.107689962" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.384459 4932 generic.go:334] "Generic (PLEG): container finished" podID="56734660-55cc-463c-89f2-131bc9109dab" containerID="979fb0febd6062fe5161812c56f74561bb0c81dc6ed2e8e26cb348d3275186d6" exitCode=0 Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.384718 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hvt6h" event={"ID":"56734660-55cc-463c-89f2-131bc9109dab","Type":"ContainerDied","Data":"979fb0febd6062fe5161812c56f74561bb0c81dc6ed2e8e26cb348d3275186d6"} Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.388961 4932 generic.go:334] "Generic (PLEG): container finished" podID="7680bf6b-efd6-452a-8900-09cf55b203ff" containerID="0d634b73a958b2e21485770f0ca87b0cc9a8038deca230cf324c0047e0c7f89e" exitCode=0 Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.389019 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5bd9-account-create-update-7tv8h" event={"ID":"7680bf6b-efd6-452a-8900-09cf55b203ff","Type":"ContainerDied","Data":"0d634b73a958b2e21485770f0ca87b0cc9a8038deca230cf324c0047e0c7f89e"} Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.393489 4932 generic.go:334] "Generic (PLEG): container finished" podID="ac9c39c2-bf9e-4f11-b37f-17089fce08e7" containerID="48cdc7bd0a5fa5affdc3d044cfe0ccc940cdde09dc40fd9f4253e5cd4c996f16" exitCode=0 Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.393827 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a65d-account-create-update-chx2v" event={"ID":"ac9c39c2-bf9e-4f11-b37f-17089fce08e7","Type":"ContainerDied","Data":"48cdc7bd0a5fa5affdc3d044cfe0ccc940cdde09dc40fd9f4253e5cd4c996f16"} Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.408039 4932 generic.go:334] "Generic (PLEG): container finished" podID="f7988cea-6aa8-4552-8965-04b417c91831" containerID="03cc21b056f77810add58b5621bb79299b2f95efe33228e5665e27461f3e50f3" exitCode=0 Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.409369 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hn6qq" event={"ID":"f7988cea-6aa8-4552-8965-04b417c91831","Type":"ContainerDied","Data":"03cc21b056f77810add58b5621bb79299b2f95efe33228e5665e27461f3e50f3"} Feb 18 19:53:04 crc 
kubenswrapper[4932]: I0218 19:53:04.412311 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.732937 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-hbs76"] Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.734186 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hbs76" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.744963 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hbs76"] Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.835758 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-53f4-account-create-update-mh2bq"] Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.836756 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.841492 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.847789 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9deee6-7804-492e-88c9-147087152416-operator-scripts\") pod \"neutron-db-create-hbs76\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " pod="openstack/neutron-db-create-hbs76" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.847861 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvhxw\" (UniqueName: \"kubernetes.io/projected/0b9deee6-7804-492e-88c9-147087152416-kube-api-access-nvhxw\") pod \"neutron-db-create-hbs76\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " pod="openstack/neutron-db-create-hbs76" 
Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.865227 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-53f4-account-create-update-mh2bq"] Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.895271 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-4ghxf"] Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.896336 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.900597 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-s5bnj" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.900794 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.940587 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-4ghxf"] Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949047 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-config-data\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949086 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhrgc\" (UniqueName: \"kubernetes.io/projected/ca3578cc-7bd4-4e77-8b29-bbb38f588260-kube-api-access-xhrgc\") pod \"neutron-53f4-account-create-update-mh2bq\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949122 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/0b9deee6-7804-492e-88c9-147087152416-operator-scripts\") pod \"neutron-db-create-hbs76\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " pod="openstack/neutron-db-create-hbs76" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949309 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvhxw\" (UniqueName: \"kubernetes.io/projected/0b9deee6-7804-492e-88c9-147087152416-kube-api-access-nvhxw\") pod \"neutron-db-create-hbs76\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " pod="openstack/neutron-db-create-hbs76" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949380 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-db-sync-config-data\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949447 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca3578cc-7bd4-4e77-8b29-bbb38f588260-operator-scripts\") pod \"neutron-53f4-account-create-update-mh2bq\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949564 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2htl\" (UniqueName: \"kubernetes.io/projected/bc05154b-7f25-4fb1-8293-9aba06523c37-kube-api-access-s2htl\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.949970 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9deee6-7804-492e-88c9-147087152416-operator-scripts\") pod \"neutron-db-create-hbs76\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " pod="openstack/neutron-db-create-hbs76" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.950459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-combined-ca-bundle\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:04 crc kubenswrapper[4932]: I0218 19:53:04.972921 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvhxw\" (UniqueName: \"kubernetes.io/projected/0b9deee6-7804-492e-88c9-147087152416-kube-api-access-nvhxw\") pod \"neutron-db-create-hbs76\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " pod="openstack/neutron-db-create-hbs76" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.051572 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-combined-ca-bundle\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.051671 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-config-data\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.051698 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhrgc\" (UniqueName: 
\"kubernetes.io/projected/ca3578cc-7bd4-4e77-8b29-bbb38f588260-kube-api-access-xhrgc\") pod \"neutron-53f4-account-create-update-mh2bq\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.051763 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-db-sync-config-data\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.051803 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca3578cc-7bd4-4e77-8b29-bbb38f588260-operator-scripts\") pod \"neutron-53f4-account-create-update-mh2bq\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.051853 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2htl\" (UniqueName: \"kubernetes.io/projected/bc05154b-7f25-4fb1-8293-9aba06523c37-kube-api-access-s2htl\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.053106 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca3578cc-7bd4-4e77-8b29-bbb38f588260-operator-scripts\") pod \"neutron-53f4-account-create-update-mh2bq\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.055818 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-config-data\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.055960 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hbs76" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.056583 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-db-sync-config-data\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.058808 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-combined-ca-bundle\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.070856 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2htl\" (UniqueName: \"kubernetes.io/projected/bc05154b-7f25-4fb1-8293-9aba06523c37-kube-api-access-s2htl\") pod \"watcher-db-sync-4ghxf\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.071657 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhrgc\" (UniqueName: \"kubernetes.io/projected/ca3578cc-7bd4-4e77-8b29-bbb38f588260-kube-api-access-xhrgc\") pod \"neutron-53f4-account-create-update-mh2bq\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.209352 4932 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.239242 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.461835 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"56ce23bb6103e557429dc8690b553377257cd936a4bb509e20a8c92ae8b56a22"} Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.462387 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"09903fc1344e02ff0c9b44820c5f554d569cf63ffb8c7ab34e7b724c0902da20"} Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.569532 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hbs76"] Feb 18 19:53:05 crc kubenswrapper[4932]: I0218 19:53:05.772410 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-53f4-account-create-update-mh2bq"] Feb 18 19:53:05 crc kubenswrapper[4932]: W0218 19:53:05.777197 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca3578cc_7bd4_4e77_8b29_bbb38f588260.slice/crio-43c082521df80e8aaf2a62b7e035919e319670ae64cf37a87221a05f6dd28385 WatchSource:0}: Error finding container 43c082521df80e8aaf2a62b7e035919e319670ae64cf37a87221a05f6dd28385: Status 404 returned error can't find the container with id 43c082521df80e8aaf2a62b7e035919e319670ae64cf37a87221a05f6dd28385 Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.122340 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5bd9-account-create-update-7tv8h" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.131249 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hvt6h" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.174018 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwdm7\" (UniqueName: \"kubernetes.io/projected/56734660-55cc-463c-89f2-131bc9109dab-kube-api-access-dwdm7\") pod \"56734660-55cc-463c-89f2-131bc9109dab\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.174088 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzrrj\" (UniqueName: \"kubernetes.io/projected/7680bf6b-efd6-452a-8900-09cf55b203ff-kube-api-access-mzrrj\") pod \"7680bf6b-efd6-452a-8900-09cf55b203ff\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.174153 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7680bf6b-efd6-452a-8900-09cf55b203ff-operator-scripts\") pod \"7680bf6b-efd6-452a-8900-09cf55b203ff\" (UID: \"7680bf6b-efd6-452a-8900-09cf55b203ff\") " Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.174236 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56734660-55cc-463c-89f2-131bc9109dab-operator-scripts\") pod \"56734660-55cc-463c-89f2-131bc9109dab\" (UID: \"56734660-55cc-463c-89f2-131bc9109dab\") " Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.175785 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56734660-55cc-463c-89f2-131bc9109dab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"56734660-55cc-463c-89f2-131bc9109dab" (UID: "56734660-55cc-463c-89f2-131bc9109dab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.176586 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-4ghxf"] Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.176859 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7680bf6b-efd6-452a-8900-09cf55b203ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7680bf6b-efd6-452a-8900-09cf55b203ff" (UID: "7680bf6b-efd6-452a-8900-09cf55b203ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.182500 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56734660-55cc-463c-89f2-131bc9109dab-kube-api-access-dwdm7" (OuterVolumeSpecName: "kube-api-access-dwdm7") pod "56734660-55cc-463c-89f2-131bc9109dab" (UID: "56734660-55cc-463c-89f2-131bc9109dab"). InnerVolumeSpecName "kube-api-access-dwdm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.189359 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7680bf6b-efd6-452a-8900-09cf55b203ff-kube-api-access-mzrrj" (OuterVolumeSpecName: "kube-api-access-mzrrj") pod "7680bf6b-efd6-452a-8900-09cf55b203ff" (UID: "7680bf6b-efd6-452a-8900-09cf55b203ff"). InnerVolumeSpecName "kube-api-access-mzrrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.278406 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56734660-55cc-463c-89f2-131bc9109dab-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.278443 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwdm7\" (UniqueName: \"kubernetes.io/projected/56734660-55cc-463c-89f2-131bc9109dab-kube-api-access-dwdm7\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.278455 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzrrj\" (UniqueName: \"kubernetes.io/projected/7680bf6b-efd6-452a-8900-09cf55b203ff-kube-api-access-mzrrj\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:06 crc kubenswrapper[4932]: I0218 19:53:06.278464 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7680bf6b-efd6-452a-8900-09cf55b203ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.379572 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.423338 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a65d-account-create-update-chx2v" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.475696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4ghxf" event={"ID":"bc05154b-7f25-4fb1-8293-9aba06523c37","Type":"ContainerStarted","Data":"15675a36136757370796fc216004ca775eac02c9effd24c85dfe90820b2828ae"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.479920 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hbs76" event={"ID":"0b9deee6-7804-492e-88c9-147087152416","Type":"ContainerStarted","Data":"2a62cc7c92b0f61fc993f04377e1428679cd22afc955b4da72b0e6e2d00eb682"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.479950 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hbs76" event={"ID":"0b9deee6-7804-492e-88c9-147087152416","Type":"ContainerStarted","Data":"1ad029f86ec5f0d0a06dcaedd3663dce48eadadc46ce509227278c4f119388a2"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.494738 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hn6qq" event={"ID":"f7988cea-6aa8-4552-8965-04b417c91831","Type":"ContainerDied","Data":"d81c7286fbf08cc79bae9df6210ed5ad98c0dfda1ac48aec2223ff4bd51a0816"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.494765 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d81c7286fbf08cc79bae9df6210ed5ad98c0dfda1ac48aec2223ff4bd51a0816" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.494822 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.499544 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-hbs76" podStartSLOduration=2.499531085 podStartE2EDuration="2.499531085s" podCreationTimestamp="2026-02-18 19:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:06.49733142 +0000 UTC m=+1150.079286265" watchObservedRunningTime="2026-02-18 19:53:06.499531085 +0000 UTC m=+1150.081485930" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.518839 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hvt6h" event={"ID":"56734660-55cc-463c-89f2-131bc9109dab","Type":"ContainerDied","Data":"46b0c6beb77ba63cc4fb7bf59b078265e0be2b3e41c391faa1cfacce870602a0"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.519086 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46b0c6beb77ba63cc4fb7bf59b078265e0be2b3e41c391faa1cfacce870602a0" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.519141 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hvt6h" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.532062 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5bd9-account-create-update-7tv8h" event={"ID":"7680bf6b-efd6-452a-8900-09cf55b203ff","Type":"ContainerDied","Data":"c049a4b17cd32b5f24bb7e9e3ef0f21dbe83e94bb0e6b01b6cebe4cc220b64e4"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.532092 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c049a4b17cd32b5f24bb7e9e3ef0f21dbe83e94bb0e6b01b6cebe4cc220b64e4" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.532101 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5bd9-account-create-update-7tv8h" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.588746 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7khwn\" (UniqueName: \"kubernetes.io/projected/f7988cea-6aa8-4552-8965-04b417c91831-kube-api-access-7khwn\") pod \"f7988cea-6aa8-4552-8965-04b417c91831\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.588929 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-operator-scripts\") pod \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.588962 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7988cea-6aa8-4552-8965-04b417c91831-operator-scripts\") pod \"f7988cea-6aa8-4552-8965-04b417c91831\" (UID: \"f7988cea-6aa8-4552-8965-04b417c91831\") " Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.588981 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-64cqb\" (UniqueName: \"kubernetes.io/projected/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-kube-api-access-64cqb\") pod \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\" (UID: \"ac9c39c2-bf9e-4f11-b37f-17089fce08e7\") " Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.590495 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7988cea-6aa8-4552-8965-04b417c91831-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7988cea-6aa8-4552-8965-04b417c91831" (UID: "f7988cea-6aa8-4552-8965-04b417c91831"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.590747 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac9c39c2-bf9e-4f11-b37f-17089fce08e7" (UID: "ac9c39c2-bf9e-4f11-b37f-17089fce08e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.600584 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7988cea-6aa8-4552-8965-04b417c91831-kube-api-access-7khwn" (OuterVolumeSpecName: "kube-api-access-7khwn") pod "f7988cea-6aa8-4552-8965-04b417c91831" (UID: "f7988cea-6aa8-4552-8965-04b417c91831"). InnerVolumeSpecName "kube-api-access-7khwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.608374 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-kube-api-access-64cqb" (OuterVolumeSpecName: "kube-api-access-64cqb") pod "ac9c39c2-bf9e-4f11-b37f-17089fce08e7" (UID: "ac9c39c2-bf9e-4f11-b37f-17089fce08e7"). 
InnerVolumeSpecName "kube-api-access-64cqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.634389 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"8c9bc1e6b51378c96aa83821fefc43a6edeb3618c4359e3c31206c5b84643c34"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.634431 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"91038b7abd4dc858aba34a9e38d41f405569102d713f7a7fd829187dca7a23ee"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.634439 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"d86295dcb2fcf3c7ce9e6f16518bddc225d30264891a801a9d6ec00b3e315818"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.637842 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a65d-account-create-update-chx2v" event={"ID":"ac9c39c2-bf9e-4f11-b37f-17089fce08e7","Type":"ContainerDied","Data":"51583244c9cd178243965b225a45c783c841e20d65edf2a86b3f9db92e439266"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.637883 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51583244c9cd178243965b225a45c783c841e20d65edf2a86b3f9db92e439266" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.637891 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a65d-account-create-update-chx2v" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.641355 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-53f4-account-create-update-mh2bq" event={"ID":"ca3578cc-7bd4-4e77-8b29-bbb38f588260","Type":"ContainerStarted","Data":"4abb236d79cc8592182059a25a1bc35aaa2d4ae1b8716c7469a32147843e50a4"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.641387 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-53f4-account-create-update-mh2bq" event={"ID":"ca3578cc-7bd4-4e77-8b29-bbb38f588260","Type":"ContainerStarted","Data":"43c082521df80e8aaf2a62b7e035919e319670ae64cf37a87221a05f6dd28385"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.678070 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-53f4-account-create-update-mh2bq" podStartSLOduration=2.678055371 podStartE2EDuration="2.678055371s" podCreationTimestamp="2026-02-18 19:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:06.666351052 +0000 UTC m=+1150.248305897" watchObservedRunningTime="2026-02-18 19:53:06.678055371 +0000 UTC m=+1150.260010216" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.690473 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.690498 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7988cea-6aa8-4552-8965-04b417c91831-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.690507 4932 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-64cqb\" (UniqueName: \"kubernetes.io/projected/ac9c39c2-bf9e-4f11-b37f-17089fce08e7-kube-api-access-64cqb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:06.690518 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7khwn\" (UniqueName: \"kubernetes.io/projected/f7988cea-6aa8-4552-8965-04b417c91831-kube-api-access-7khwn\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.530973 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.531243 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="config-reloader" containerID="cri-o://66d84470994100b42a53acf4561ffbafa4e810bfb2c143ce053c40ae82620693" gracePeriod=600 Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.531615 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="prometheus" containerID="cri-o://df6328f727f0438e57305c9925166837d62f5032c8dd58e3ace63bdd0cdad46f" gracePeriod=600 Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.531665 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="thanos-sidecar" containerID="cri-o://1898dd90c7cb5f44526cee3dcba285d60ab2aa3db3c6ae91c6ffaee8a1e5c768" gracePeriod=600 Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.657653 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"cfdeeebc285591118a23dbf8cae9a08259e2f51ba2d3126ba0de8a1ab322026f"} Feb 18 
19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.657882 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c0259d8c-5cfe-48a2-9a7a-c15341cb2ab5","Type":"ContainerStarted","Data":"fdd0e6806eeeccec58d225097203f7c9f01ff95648d4c9810b6d49398427d4ec"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.661003 4932 generic.go:334] "Generic (PLEG): container finished" podID="ca3578cc-7bd4-4e77-8b29-bbb38f588260" containerID="4abb236d79cc8592182059a25a1bc35aaa2d4ae1b8716c7469a32147843e50a4" exitCode=0 Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.661400 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-53f4-account-create-update-mh2bq" event={"ID":"ca3578cc-7bd4-4e77-8b29-bbb38f588260","Type":"ContainerDied","Data":"4abb236d79cc8592182059a25a1bc35aaa2d4ae1b8716c7469a32147843e50a4"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.664585 4932 generic.go:334] "Generic (PLEG): container finished" podID="0b9deee6-7804-492e-88c9-147087152416" containerID="2a62cc7c92b0f61fc993f04377e1428679cd22afc955b4da72b0e6e2d00eb682" exitCode=0 Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.664654 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hbs76" event={"ID":"0b9deee6-7804-492e-88c9-147087152416","Type":"ContainerDied","Data":"2a62cc7c92b0f61fc993f04377e1428679cd22afc955b4da72b0e6e2d00eb682"} Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.715165 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.081934784 podStartE2EDuration="50.715145728s" podCreationTimestamp="2026-02-18 19:52:17 +0000 UTC" firstStartedPulling="2026-02-18 19:52:59.146410039 +0000 UTC m=+1142.728364874" lastFinishedPulling="2026-02-18 19:53:04.779620973 +0000 UTC m=+1148.361575818" observedRunningTime="2026-02-18 19:53:07.706896115 +0000 UTC m=+1151.288850960" 
watchObservedRunningTime="2026-02-18 19:53:07.715145728 +0000 UTC m=+1151.297100573" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.959695 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8465d7b6c9-sv9w5"] Feb 18 19:53:07 crc kubenswrapper[4932]: E0218 19:53:07.960307 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7680bf6b-efd6-452a-8900-09cf55b203ff" containerName="mariadb-account-create-update" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.960381 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7680bf6b-efd6-452a-8900-09cf55b203ff" containerName="mariadb-account-create-update" Feb 18 19:53:07 crc kubenswrapper[4932]: E0218 19:53:07.960457 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9c39c2-bf9e-4f11-b37f-17089fce08e7" containerName="mariadb-account-create-update" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.960506 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9c39c2-bf9e-4f11-b37f-17089fce08e7" containerName="mariadb-account-create-update" Feb 18 19:53:07 crc kubenswrapper[4932]: E0218 19:53:07.960569 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56734660-55cc-463c-89f2-131bc9109dab" containerName="mariadb-database-create" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.960613 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="56734660-55cc-463c-89f2-131bc9109dab" containerName="mariadb-database-create" Feb 18 19:53:07 crc kubenswrapper[4932]: E0218 19:53:07.960791 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7988cea-6aa8-4552-8965-04b417c91831" containerName="mariadb-database-create" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.960842 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7988cea-6aa8-4552-8965-04b417c91831" containerName="mariadb-database-create" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.961076 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ac9c39c2-bf9e-4f11-b37f-17089fce08e7" containerName="mariadb-account-create-update" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.961224 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7988cea-6aa8-4552-8965-04b417c91831" containerName="mariadb-database-create" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.961314 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7680bf6b-efd6-452a-8900-09cf55b203ff" containerName="mariadb-account-create-update" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.974975 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="56734660-55cc-463c-89f2-131bc9109dab" containerName="mariadb-database-create" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.976424 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8465d7b6c9-sv9w5"] Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.976572 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:07 crc kubenswrapper[4932]: I0218 19:53:07.979224 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.125528 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-nb\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.125835 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-sb\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.125962 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-svc\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.126094 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-config\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.126195 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-swift-storage-0\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.126318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pljhh\" (UniqueName: \"kubernetes.io/projected/1f7bde87-22e2-49c2-a025-ab8f835dff78-kube-api-access-pljhh\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.227841 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-svc\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.228101 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-config\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.228788 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-svc\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.229021 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-config\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.229156 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-swift-storage-0\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.229795 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-swift-storage-0\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.230001 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pljhh\" (UniqueName: \"kubernetes.io/projected/1f7bde87-22e2-49c2-a025-ab8f835dff78-kube-api-access-pljhh\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.230413 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-nb\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.231083 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-sb\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.231026 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-nb\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.231671 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-sb\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.264105 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pljhh\" (UniqueName: \"kubernetes.io/projected/1f7bde87-22e2-49c2-a025-ab8f835dff78-kube-api-access-pljhh\") pod \"dnsmasq-dns-8465d7b6c9-sv9w5\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.328553 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.686253 4932 generic.go:334] "Generic (PLEG): container finished" podID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerID="df6328f727f0438e57305c9925166837d62f5032c8dd58e3ace63bdd0cdad46f" exitCode=0 Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.686304 4932 generic.go:334] "Generic (PLEG): container finished" podID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerID="1898dd90c7cb5f44526cee3dcba285d60ab2aa3db3c6ae91c6ffaee8a1e5c768" exitCode=0 Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.686312 4932 generic.go:334] "Generic (PLEG): container finished" podID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerID="66d84470994100b42a53acf4561ffbafa4e810bfb2c143ce053c40ae82620693" exitCode=0 Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.686299 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerDied","Data":"df6328f727f0438e57305c9925166837d62f5032c8dd58e3ace63bdd0cdad46f"} Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.686362 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerDied","Data":"1898dd90c7cb5f44526cee3dcba285d60ab2aa3db3c6ae91c6ffaee8a1e5c768"} Feb 18 19:53:08 crc kubenswrapper[4932]: I0218 19:53:08.686380 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerDied","Data":"66d84470994100b42a53acf4561ffbafa4e810bfb2c143ce053c40ae82620693"} Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.475661 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.490574 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.495336 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hbs76" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.495773 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600495 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600542 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config-out\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600615 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-2\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc 
kubenswrapper[4932]: I0218 19:53:11.600636 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnvgq\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-kube-api-access-cnvgq\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600652 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9deee6-7804-492e-88c9-147087152416-operator-scripts\") pod \"0b9deee6-7804-492e-88c9-147087152416\" (UID: \"0b9deee6-7804-492e-88c9-147087152416\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600685 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-thanos-prometheus-http-client-file\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600738 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-1\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600759 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-web-config\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600774 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-0\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600829 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca3578cc-7bd4-4e77-8b29-bbb38f588260-operator-scripts\") pod \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600861 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-tls-assets\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600876 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhrgc\" (UniqueName: \"kubernetes.io/projected/ca3578cc-7bd4-4e77-8b29-bbb38f588260-kube-api-access-xhrgc\") pod \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\" (UID: \"ca3578cc-7bd4-4e77-8b29-bbb38f588260\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600906 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config\") pod \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\" (UID: \"cf98dd42-289f-43fa-b4dc-c6ff814a3c25\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.600928 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvhxw\" (UniqueName: \"kubernetes.io/projected/0b9deee6-7804-492e-88c9-147087152416-kube-api-access-nvhxw\") pod \"0b9deee6-7804-492e-88c9-147087152416\" (UID: 
\"0b9deee6-7804-492e-88c9-147087152416\") " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.601687 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.602280 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.602412 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b9deee6-7804-492e-88c9-147087152416-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b9deee6-7804-492e-88c9-147087152416" (UID: "0b9deee6-7804-492e-88c9-147087152416"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.602562 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.603434 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca3578cc-7bd4-4e77-8b29-bbb38f588260-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca3578cc-7bd4-4e77-8b29-bbb38f588260" (UID: "ca3578cc-7bd4-4e77-8b29-bbb38f588260"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.611754 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b9deee6-7804-492e-88c9-147087152416-kube-api-access-nvhxw" (OuterVolumeSpecName: "kube-api-access-nvhxw") pod "0b9deee6-7804-492e-88c9-147087152416" (UID: "0b9deee6-7804-492e-88c9-147087152416"). InnerVolumeSpecName "kube-api-access-nvhxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.620556 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca3578cc-7bd4-4e77-8b29-bbb38f588260-kube-api-access-xhrgc" (OuterVolumeSpecName: "kube-api-access-xhrgc") pod "ca3578cc-7bd4-4e77-8b29-bbb38f588260" (UID: "ca3578cc-7bd4-4e77-8b29-bbb38f588260"). InnerVolumeSpecName "kube-api-access-xhrgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.627377 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config" (OuterVolumeSpecName: "config") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.627465 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config-out" (OuterVolumeSpecName: "config-out") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.628037 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.632118 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.647828 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-kube-api-access-cnvgq" (OuterVolumeSpecName: "kube-api-access-cnvgq") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "kube-api-access-cnvgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.648600 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.651315 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-web-config" (OuterVolumeSpecName: "web-config") pod "cf98dd42-289f-43fa-b4dc-c6ff814a3c25" (UID: "cf98dd42-289f-43fa-b4dc-c6ff814a3c25"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702491 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702545 4932 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702565 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702584 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/ca3578cc-7bd4-4e77-8b29-bbb38f588260-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702599 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhrgc\" (UniqueName: \"kubernetes.io/projected/ca3578cc-7bd4-4e77-8b29-bbb38f588260-kube-api-access-xhrgc\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702611 4932 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702623 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702638 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvhxw\" (UniqueName: \"kubernetes.io/projected/0b9deee6-7804-492e-88c9-147087152416-kube-api-access-nvhxw\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702695 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") on node \"crc\" " Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702717 4932 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702734 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702752 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnvgq\" (UniqueName: \"kubernetes.io/projected/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-kube-api-access-cnvgq\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702769 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b9deee6-7804-492e-88c9-147087152416-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.702785 4932 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cf98dd42-289f-43fa-b4dc-c6ff814a3c25-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.716906 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-53f4-account-create-update-mh2bq" event={"ID":"ca3578cc-7bd4-4e77-8b29-bbb38f588260","Type":"ContainerDied","Data":"43c082521df80e8aaf2a62b7e035919e319670ae64cf37a87221a05f6dd28385"} Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.716962 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43c082521df80e8aaf2a62b7e035919e319670ae64cf37a87221a05f6dd28385" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.717039 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-53f4-account-create-update-mh2bq" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.719028 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hbs76" event={"ID":"0b9deee6-7804-492e-88c9-147087152416","Type":"ContainerDied","Data":"1ad029f86ec5f0d0a06dcaedd3663dce48eadadc46ce509227278c4f119388a2"} Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.719055 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ad029f86ec5f0d0a06dcaedd3663dce48eadadc46ce509227278c4f119388a2" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.719083 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hbs76" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.720282 4932 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.720422 4932 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69") on node "crc" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.721283 4932 generic.go:334] "Generic (PLEG): container finished" podID="1bbf2873-6ca9-4569-b5b6-3003511c02ba" containerID="104353923ef97f2e6933dbfcfbfc2a9125473f1373667e2eb5163afb4316da88" exitCode=0 Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.721357 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rl7xx" event={"ID":"1bbf2873-6ca9-4569-b5b6-3003511c02ba","Type":"ContainerDied","Data":"104353923ef97f2e6933dbfcfbfc2a9125473f1373667e2eb5163afb4316da88"} Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.724916 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"cf98dd42-289f-43fa-b4dc-c6ff814a3c25","Type":"ContainerDied","Data":"c079ef0a75a184583fc3bcc63484ddbcd7e9466dbb03675318140b785c3f7c07"} Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.724952 4932 scope.go:117] "RemoveContainer" containerID="df6328f727f0438e57305c9925166837d62f5032c8dd58e3ace63bdd0cdad46f" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.725003 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.780962 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.799312 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.804577 4932 reconciler_common.go:293] "Volume detached for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806110 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:53:11 crc kubenswrapper[4932]: E0218 19:53:11.806514 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="init-config-reloader" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806528 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="init-config-reloader" Feb 18 19:53:11 crc kubenswrapper[4932]: E0218 19:53:11.806545 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="config-reloader" Feb 18 19:53:11 crc 
kubenswrapper[4932]: I0218 19:53:11.806553 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="config-reloader" Feb 18 19:53:11 crc kubenswrapper[4932]: E0218 19:53:11.806570 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="prometheus" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806578 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="prometheus" Feb 18 19:53:11 crc kubenswrapper[4932]: E0218 19:53:11.806598 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="thanos-sidecar" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806605 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="thanos-sidecar" Feb 18 19:53:11 crc kubenswrapper[4932]: E0218 19:53:11.806621 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9deee6-7804-492e-88c9-147087152416" containerName="mariadb-database-create" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806628 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9deee6-7804-492e-88c9-147087152416" containerName="mariadb-database-create" Feb 18 19:53:11 crc kubenswrapper[4932]: E0218 19:53:11.806650 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3578cc-7bd4-4e77-8b29-bbb38f588260" containerName="mariadb-account-create-update" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806659 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3578cc-7bd4-4e77-8b29-bbb38f588260" containerName="mariadb-account-create-update" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806848 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="config-reloader" Feb 
18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806895 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3578cc-7bd4-4e77-8b29-bbb38f588260" containerName="mariadb-account-create-update" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806919 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="prometheus" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806936 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" containerName="thanos-sidecar" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.806956 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b9deee6-7804-492e-88c9-147087152416" containerName="mariadb-database-create" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.808806 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.811147 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-5jcnf" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.811502 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.811641 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.812784 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.812942 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 
19:53:11.813126 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.814024 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.814090 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.831385 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.834011 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906080 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906336 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906358 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906379 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-config\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906407 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906438 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906484 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" 
Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906505 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnmwk\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-kube-api-access-fnmwk\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906522 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906554 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1783f11-a79f-49d9-a637-224863cdb0ad-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906587 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906607 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:11 crc kubenswrapper[4932]: I0218 19:53:11.906650 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008187 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008259 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008283 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnmwk\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-kube-api-access-fnmwk\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008298 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" 
(UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008332 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1783f11-a79f-49d9-a637-224863cdb0ad-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008357 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008374 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008409 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008448 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008479 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008506 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008547 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-config\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.008606 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" 
Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.012990 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.013037 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.014554 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.014709 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.015344 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.015351 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.015746 4932 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.015793 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-config\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.015793 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e039419306e79ade7652e80c67474011a5658585fd3b39d0b236ffa94ab5d0db/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.017779 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1783f11-a79f-49d9-a637-224863cdb0ad-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.019636 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.023891 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.027261 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.032393 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnmwk\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-kube-api-access-fnmwk\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.071455 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:12 crc kubenswrapper[4932]: I0218 19:53:12.140866 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:13 crc kubenswrapper[4932]: I0218 19:53:13.189903 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf98dd42-289f-43fa-b4dc-c6ff814a3c25" path="/var/lib/kubelet/pods/cf98dd42-289f-43fa-b4dc-c6ff814a3c25/volumes" Feb 18 19:53:15 crc kubenswrapper[4932]: I0218 19:53:15.999008 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rl7xx" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.080793 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-combined-ca-bundle\") pod \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.081006 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlv4k\" (UniqueName: \"kubernetes.io/projected/1bbf2873-6ca9-4569-b5b6-3003511c02ba-kube-api-access-qlv4k\") pod \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.081065 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-config-data\") pod \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.081137 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-db-sync-config-data\") pod \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\" (UID: \"1bbf2873-6ca9-4569-b5b6-3003511c02ba\") " Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.089363 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1bbf2873-6ca9-4569-b5b6-3003511c02ba" (UID: "1bbf2873-6ca9-4569-b5b6-3003511c02ba"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.090350 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bbf2873-6ca9-4569-b5b6-3003511c02ba-kube-api-access-qlv4k" (OuterVolumeSpecName: "kube-api-access-qlv4k") pod "1bbf2873-6ca9-4569-b5b6-3003511c02ba" (UID: "1bbf2873-6ca9-4569-b5b6-3003511c02ba"). InnerVolumeSpecName "kube-api-access-qlv4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.108058 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bbf2873-6ca9-4569-b5b6-3003511c02ba" (UID: "1bbf2873-6ca9-4569-b5b6-3003511c02ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.134983 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-config-data" (OuterVolumeSpecName: "config-data") pod "1bbf2873-6ca9-4569-b5b6-3003511c02ba" (UID: "1bbf2873-6ca9-4569-b5b6-3003511c02ba"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.184003 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.184035 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlv4k\" (UniqueName: \"kubernetes.io/projected/1bbf2873-6ca9-4569-b5b6-3003511c02ba-kube-api-access-qlv4k\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.184046 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.184055 4932 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1bbf2873-6ca9-4569-b5b6-3003511c02ba-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.480889 4932 scope.go:117] "RemoveContainer" containerID="1898dd90c7cb5f44526cee3dcba285d60ab2aa3db3c6ae91c6ffaee8a1e5c768" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.570700 4932 scope.go:117] "RemoveContainer" containerID="66d84470994100b42a53acf4561ffbafa4e810bfb2c143ce053c40ae82620693" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.594755 4932 scope.go:117] "RemoveContainer" containerID="e180b06fd671083f79001b0061a303617d1566914909b796e6dc37109bc742cf" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.774857 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h526s" 
event={"ID":"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a","Type":"ContainerStarted","Data":"2dcf1d051e29c868ab7c7db13dbafa7710ab23c52dd39329f8dbfbb2b5ea9459"} Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.784576 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rl7xx" event={"ID":"1bbf2873-6ca9-4569-b5b6-3003511c02ba","Type":"ContainerDied","Data":"f068e85210ddcd828af2d489d54882cc64dba8b583684c4c6f7597bf8f804826"} Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.784612 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rl7xx" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.784621 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f068e85210ddcd828af2d489d54882cc64dba8b583684c4c6f7597bf8f804826" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.802966 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-h526s" podStartSLOduration=1.4788571529999999 podStartE2EDuration="14.802936208s" podCreationTimestamp="2026-02-18 19:53:02 +0000 UTC" firstStartedPulling="2026-02-18 19:53:03.19732856 +0000 UTC m=+1146.779283405" lastFinishedPulling="2026-02-18 19:53:16.521407605 +0000 UTC m=+1160.103362460" observedRunningTime="2026-02-18 19:53:16.790519182 +0000 UTC m=+1160.372474027" watchObservedRunningTime="2026-02-18 19:53:16.802936208 +0000 UTC m=+1160.384891083" Feb 18 19:53:16 crc kubenswrapper[4932]: I0218 19:53:16.903321 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8465d7b6c9-sv9w5"] Feb 18 19:53:16 crc kubenswrapper[4932]: W0218 19:53:16.914604 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f7bde87_22e2_49c2_a025_ab8f835dff78.slice/crio-54e67ba1a029ebbf1ee8283379add136ec5a0898711c3c99d70bda3b8789460d WatchSource:0}: Error finding container 
54e67ba1a029ebbf1ee8283379add136ec5a0898711c3c99d70bda3b8789460d: Status 404 returned error can't find the container with id 54e67ba1a029ebbf1ee8283379add136ec5a0898711c3c99d70bda3b8789460d Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.044232 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.465092 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8465d7b6c9-sv9w5"] Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.501917 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8f475786f-6jkn9"] Feb 18 19:53:17 crc kubenswrapper[4932]: E0218 19:53:17.502289 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bbf2873-6ca9-4569-b5b6-3003511c02ba" containerName="glance-db-sync" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.502304 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bbf2873-6ca9-4569-b5b6-3003511c02ba" containerName="glance-db-sync" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.502479 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bbf2873-6ca9-4569-b5b6-3003511c02ba" containerName="glance-db-sync" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.503278 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.530517 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8f475786f-6jkn9"] Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.620223 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-nb\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.620292 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-swift-storage-0\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.620399 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-config\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.620423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-svc\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.620495 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-sb\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.620618 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kb4v\" (UniqueName: \"kubernetes.io/projected/81c5b019-830a-45a5-b05e-22f7aa7e41c7-kube-api-access-6kb4v\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.721602 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-nb\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.721654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-swift-storage-0\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.721689 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-config\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.721706 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-svc\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.721738 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-sb\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.721796 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kb4v\" (UniqueName: \"kubernetes.io/projected/81c5b019-830a-45a5-b05e-22f7aa7e41c7-kube-api-access-6kb4v\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.722866 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-nb\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.722894 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-swift-storage-0\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.723439 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-config\") pod 
\"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.723730 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-svc\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.723960 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-sb\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.742906 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kb4v\" (UniqueName: \"kubernetes.io/projected/81c5b019-830a-45a5-b05e-22f7aa7e41c7-kube-api-access-6kb4v\") pod \"dnsmasq-dns-8f475786f-6jkn9\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.798947 4932 generic.go:334] "Generic (PLEG): container finished" podID="1f7bde87-22e2-49c2-a025-ab8f835dff78" containerID="8f1f6c9d991aa15d42b648ceda3a3c5f26b30bbb576f481c856678b1bf62a3ab" exitCode=0 Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.799044 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" event={"ID":"1f7bde87-22e2-49c2-a025-ab8f835dff78","Type":"ContainerDied","Data":"8f1f6c9d991aa15d42b648ceda3a3c5f26b30bbb576f481c856678b1bf62a3ab"} Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.800066 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" 
event={"ID":"1f7bde87-22e2-49c2-a025-ab8f835dff78","Type":"ContainerStarted","Data":"54e67ba1a029ebbf1ee8283379add136ec5a0898711c3c99d70bda3b8789460d"} Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.813838 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerStarted","Data":"517321ee2b5c108f37907af390aff2f58338e81a6d4f29d0b1fb1230f8840a63"} Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.817734 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.820216 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4ghxf" event={"ID":"bc05154b-7f25-4fb1-8293-9aba06523c37","Type":"ContainerStarted","Data":"e723a55a533327bae796eda64399cc0b1ee1750e65068515a7e5625e2f091ec4"} Feb 18 19:53:17 crc kubenswrapper[4932]: I0218 19:53:17.846058 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-4ghxf" podStartSLOduration=3.444492023 podStartE2EDuration="13.846037493s" podCreationTimestamp="2026-02-18 19:53:04 +0000 UTC" firstStartedPulling="2026-02-18 19:53:06.193275093 +0000 UTC m=+1149.775229938" lastFinishedPulling="2026-02-18 19:53:16.594820563 +0000 UTC m=+1160.176775408" observedRunningTime="2026-02-18 19:53:17.843263865 +0000 UTC m=+1161.425218730" watchObservedRunningTime="2026-02-18 19:53:17.846037493 +0000 UTC m=+1161.427992338" Feb 18 19:53:18 crc kubenswrapper[4932]: E0218 19:53:18.139112 4932 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 18 19:53:18 crc kubenswrapper[4932]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/1f7bde87-22e2-49c2-a025-ab8f835dff78/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 18 19:53:18 
crc kubenswrapper[4932]: > podSandboxID="54e67ba1a029ebbf1ee8283379add136ec5a0898711c3c99d70bda3b8789460d" Feb 18 19:53:18 crc kubenswrapper[4932]: E0218 19:53:18.139556 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 19:53:18 crc kubenswrapper[4932]: container &Container{Name:dnsmasq-dns,Image:38.102.83.58:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncbh5fh58h56bh64bh684h98h654h64bh687h645h54h548h87h59dh56dh655hd9hbfh87h5c9h68bh645h64h8bh8bh585h5bdh55ch546hfbhbdq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,Sub
PathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pljhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8465d7b6c9-sv9w5_openstack(1f7bde87-22e2-49c2-a025-ab8f835dff78): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/1f7bde87-22e2-49c2-a025-ab8f835dff78/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 18 19:53:18 crc kubenswrapper[4932]: > logger="UnhandledError" Feb 18 19:53:18 crc kubenswrapper[4932]: E0218 19:53:18.141017 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount 
`/var/lib/kubelet/pods/1f7bde87-22e2-49c2-a025-ab8f835dff78/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" podUID="1f7bde87-22e2-49c2-a025-ab8f835dff78" Feb 18 19:53:18 crc kubenswrapper[4932]: I0218 19:53:18.343956 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8f475786f-6jkn9"] Feb 18 19:53:18 crc kubenswrapper[4932]: W0218 19:53:18.350440 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81c5b019_830a_45a5_b05e_22f7aa7e41c7.slice/crio-40635d0f4580a3bc434b2fd370ad4b541db323a92086fd21f44bf21127a2ea88 WatchSource:0}: Error finding container 40635d0f4580a3bc434b2fd370ad4b541db323a92086fd21f44bf21127a2ea88: Status 404 returned error can't find the container with id 40635d0f4580a3bc434b2fd370ad4b541db323a92086fd21f44bf21127a2ea88 Feb 18 19:53:18 crc kubenswrapper[4932]: I0218 19:53:18.829726 4932 generic.go:334] "Generic (PLEG): container finished" podID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerID="dd9cde3170c69a353ec61b65c4f18cbd1534c0a768ecc45c08a9e745323cd132" exitCode=0 Feb 18 19:53:18 crc kubenswrapper[4932]: I0218 19:53:18.829799 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" event={"ID":"81c5b019-830a-45a5-b05e-22f7aa7e41c7","Type":"ContainerDied","Data":"dd9cde3170c69a353ec61b65c4f18cbd1534c0a768ecc45c08a9e745323cd132"} Feb 18 19:53:18 crc kubenswrapper[4932]: I0218 19:53:18.830037 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" event={"ID":"81c5b019-830a-45a5-b05e-22f7aa7e41c7","Type":"ContainerStarted","Data":"40635d0f4580a3bc434b2fd370ad4b541db323a92086fd21f44bf21127a2ea88"} Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.440919 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.551070 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-svc\") pod \"1f7bde87-22e2-49c2-a025-ab8f835dff78\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.551151 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-swift-storage-0\") pod \"1f7bde87-22e2-49c2-a025-ab8f835dff78\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.551253 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-sb\") pod \"1f7bde87-22e2-49c2-a025-ab8f835dff78\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.551319 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-nb\") pod \"1f7bde87-22e2-49c2-a025-ab8f835dff78\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.551351 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-config\") pod \"1f7bde87-22e2-49c2-a025-ab8f835dff78\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.551761 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pljhh\" 
(UniqueName: \"kubernetes.io/projected/1f7bde87-22e2-49c2-a025-ab8f835dff78-kube-api-access-pljhh\") pod \"1f7bde87-22e2-49c2-a025-ab8f835dff78\" (UID: \"1f7bde87-22e2-49c2-a025-ab8f835dff78\") " Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.558425 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7bde87-22e2-49c2-a025-ab8f835dff78-kube-api-access-pljhh" (OuterVolumeSpecName: "kube-api-access-pljhh") pod "1f7bde87-22e2-49c2-a025-ab8f835dff78" (UID: "1f7bde87-22e2-49c2-a025-ab8f835dff78"). InnerVolumeSpecName "kube-api-access-pljhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.591848 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1f7bde87-22e2-49c2-a025-ab8f835dff78" (UID: "1f7bde87-22e2-49c2-a025-ab8f835dff78"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.597012 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f7bde87-22e2-49c2-a025-ab8f835dff78" (UID: "1f7bde87-22e2-49c2-a025-ab8f835dff78"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.607724 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1f7bde87-22e2-49c2-a025-ab8f835dff78" (UID: "1f7bde87-22e2-49c2-a025-ab8f835dff78"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.610033 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1f7bde87-22e2-49c2-a025-ab8f835dff78" (UID: "1f7bde87-22e2-49c2-a025-ab8f835dff78"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.617436 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-config" (OuterVolumeSpecName: "config") pod "1f7bde87-22e2-49c2-a025-ab8f835dff78" (UID: "1f7bde87-22e2-49c2-a025-ab8f835dff78"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.653867 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.653901 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pljhh\" (UniqueName: \"kubernetes.io/projected/1f7bde87-22e2-49c2-a025-ab8f835dff78-kube-api-access-pljhh\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.653912 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.653923 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" 
Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.653932 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.653940 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f7bde87-22e2-49c2-a025-ab8f835dff78-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.840034 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerStarted","Data":"81f9d76b429826048a1f76e9841d9bd5c8224e1c54ca1834ee1d11eed8e3afa6"} Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.842879 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" event={"ID":"81c5b019-830a-45a5-b05e-22f7aa7e41c7","Type":"ContainerStarted","Data":"ada62a006618a52de9cbdc7e1191675216899513cfdb323ef65c636601133e63"} Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.843438 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.844794 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" event={"ID":"1f7bde87-22e2-49c2-a025-ab8f835dff78","Type":"ContainerDied","Data":"54e67ba1a029ebbf1ee8283379add136ec5a0898711c3c99d70bda3b8789460d"} Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.844843 4932 scope.go:117] "RemoveContainer" containerID="8f1f6c9d991aa15d42b648ceda3a3c5f26b30bbb576f481c856678b1bf62a3ab" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.844848 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8465d7b6c9-sv9w5" Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.940528 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8465d7b6c9-sv9w5"] Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.950402 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8465d7b6c9-sv9w5"] Feb 18 19:53:19 crc kubenswrapper[4932]: I0218 19:53:19.956446 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" podStartSLOduration=2.95642951 podStartE2EDuration="2.95642951s" podCreationTimestamp="2026-02-18 19:53:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:19.93978339 +0000 UTC m=+1163.521738235" watchObservedRunningTime="2026-02-18 19:53:19.95642951 +0000 UTC m=+1163.538384355" Feb 18 19:53:21 crc kubenswrapper[4932]: I0218 19:53:21.188373 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f7bde87-22e2-49c2-a025-ab8f835dff78" path="/var/lib/kubelet/pods/1f7bde87-22e2-49c2-a025-ab8f835dff78/volumes" Feb 18 19:53:22 crc kubenswrapper[4932]: I0218 19:53:22.874995 4932 generic.go:334] "Generic (PLEG): container finished" podID="bc05154b-7f25-4fb1-8293-9aba06523c37" containerID="e723a55a533327bae796eda64399cc0b1ee1750e65068515a7e5625e2f091ec4" exitCode=0 Feb 18 19:53:22 crc kubenswrapper[4932]: I0218 19:53:22.875084 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4ghxf" event={"ID":"bc05154b-7f25-4fb1-8293-9aba06523c37","Type":"ContainerDied","Data":"e723a55a533327bae796eda64399cc0b1ee1750e65068515a7e5625e2f091ec4"} Feb 18 19:53:23 crc kubenswrapper[4932]: I0218 19:53:23.887398 4932 generic.go:334] "Generic (PLEG): container finished" podID="14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" 
containerID="2dcf1d051e29c868ab7c7db13dbafa7710ab23c52dd39329f8dbfbb2b5ea9459" exitCode=0 Feb 18 19:53:23 crc kubenswrapper[4932]: I0218 19:53:23.887474 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h526s" event={"ID":"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a","Type":"ContainerDied","Data":"2dcf1d051e29c868ab7c7db13dbafa7710ab23c52dd39329f8dbfbb2b5ea9459"} Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.281542 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.339255 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-db-sync-config-data\") pod \"bc05154b-7f25-4fb1-8293-9aba06523c37\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.339316 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-config-data\") pod \"bc05154b-7f25-4fb1-8293-9aba06523c37\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.339449 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-combined-ca-bundle\") pod \"bc05154b-7f25-4fb1-8293-9aba06523c37\" (UID: \"bc05154b-7f25-4fb1-8293-9aba06523c37\") " Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.339476 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2htl\" (UniqueName: \"kubernetes.io/projected/bc05154b-7f25-4fb1-8293-9aba06523c37-kube-api-access-s2htl\") pod \"bc05154b-7f25-4fb1-8293-9aba06523c37\" (UID: 
\"bc05154b-7f25-4fb1-8293-9aba06523c37\") " Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.344913 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bc05154b-7f25-4fb1-8293-9aba06523c37" (UID: "bc05154b-7f25-4fb1-8293-9aba06523c37"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.345435 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc05154b-7f25-4fb1-8293-9aba06523c37-kube-api-access-s2htl" (OuterVolumeSpecName: "kube-api-access-s2htl") pod "bc05154b-7f25-4fb1-8293-9aba06523c37" (UID: "bc05154b-7f25-4fb1-8293-9aba06523c37"). InnerVolumeSpecName "kube-api-access-s2htl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.364262 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc05154b-7f25-4fb1-8293-9aba06523c37" (UID: "bc05154b-7f25-4fb1-8293-9aba06523c37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.386115 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-config-data" (OuterVolumeSpecName: "config-data") pod "bc05154b-7f25-4fb1-8293-9aba06523c37" (UID: "bc05154b-7f25-4fb1-8293-9aba06523c37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.440852 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.440895 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2htl\" (UniqueName: \"kubernetes.io/projected/bc05154b-7f25-4fb1-8293-9aba06523c37-kube-api-access-s2htl\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.440910 4932 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.440918 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc05154b-7f25-4fb1-8293-9aba06523c37-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.898705 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4ghxf" event={"ID":"bc05154b-7f25-4fb1-8293-9aba06523c37","Type":"ContainerDied","Data":"15675a36136757370796fc216004ca775eac02c9effd24c85dfe90820b2828ae"} Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.898770 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15675a36136757370796fc216004ca775eac02c9effd24c85dfe90820b2828ae" Feb 18 19:53:24 crc kubenswrapper[4932]: I0218 19:53:24.898728 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4ghxf" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.232839 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.361235 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5sds\" (UniqueName: \"kubernetes.io/projected/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-kube-api-access-r5sds\") pod \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.361367 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-config-data\") pod \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.361480 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-combined-ca-bundle\") pod \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\" (UID: \"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a\") " Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.376024 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-kube-api-access-r5sds" (OuterVolumeSpecName: "kube-api-access-r5sds") pod "14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" (UID: "14c3aa11-529c-423d-bb7d-30fd0d5a3e7a"). InnerVolumeSpecName "kube-api-access-r5sds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.388840 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" (UID: "14c3aa11-529c-423d-bb7d-30fd0d5a3e7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.404430 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-config-data" (OuterVolumeSpecName: "config-data") pod "14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" (UID: "14c3aa11-529c-423d-bb7d-30fd0d5a3e7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.463405 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.463435 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5sds\" (UniqueName: \"kubernetes.io/projected/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-kube-api-access-r5sds\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.463447 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.912158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h526s" event={"ID":"14c3aa11-529c-423d-bb7d-30fd0d5a3e7a","Type":"ContainerDied","Data":"43fb9c3d5607cbfcdb2e71bfe2fe586c4b79577442d0345f45f7e3f3cb5eb6e7"} Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.912224 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43fb9c3d5607cbfcdb2e71bfe2fe586c4b79577442d0345f45f7e3f3cb5eb6e7" Feb 18 19:53:25 crc kubenswrapper[4932]: I0218 19:53:25.912278 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-h526s" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156396 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pchf4"] Feb 18 19:53:26 crc kubenswrapper[4932]: E0218 19:53:26.156725 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7bde87-22e2-49c2-a025-ab8f835dff78" containerName="init" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156743 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7bde87-22e2-49c2-a025-ab8f835dff78" containerName="init" Feb 18 19:53:26 crc kubenswrapper[4932]: E0218 19:53:26.156781 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" containerName="keystone-db-sync" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156790 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" containerName="keystone-db-sync" Feb 18 19:53:26 crc kubenswrapper[4932]: E0218 19:53:26.156811 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc05154b-7f25-4fb1-8293-9aba06523c37" containerName="watcher-db-sync" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156818 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc05154b-7f25-4fb1-8293-9aba06523c37" containerName="watcher-db-sync" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156972 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7bde87-22e2-49c2-a025-ab8f835dff78" containerName="init" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156987 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" containerName="keystone-db-sync" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.156999 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc05154b-7f25-4fb1-8293-9aba06523c37" containerName="watcher-db-sync" Feb 18 
19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.157501 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.167577 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.167875 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.168047 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.168158 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sk7x7" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.178277 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.190029 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pchf4"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.370238 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8f475786f-6jkn9"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.370538 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="dnsmasq-dns" containerID="cri-o://ada62a006618a52de9cbdc7e1191675216899513cfdb323ef65c636601133e63" gracePeriod=10 Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.374447 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.381064 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-credential-keys\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.381142 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-fernet-keys\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.381206 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-scripts\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.381255 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-combined-ca-bundle\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.381300 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9ghl\" (UniqueName: \"kubernetes.io/projected/c63ad2af-4b3b-4aa5-a300-06aadeef8149-kube-api-access-c9ghl\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.381352 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-config-data\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.385815 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.386867 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.396280 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.396524 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-s5bnj" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.444520 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.445919 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.473258 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.480753 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.485777 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96fe12c6-435c-4ef9-a340-c15cd050d898-logs\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.485855 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-credential-keys\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.485884 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv8fx\" (UniqueName: \"kubernetes.io/projected/96fe12c6-435c-4ef9-a340-c15cd050d898-kube-api-access-wv8fx\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.485907 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-fernet-keys\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.485939 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-scripts\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.485971 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-combined-ca-bundle\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.486012 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9ghl\" (UniqueName: \"kubernetes.io/projected/c63ad2af-4b3b-4aa5-a300-06aadeef8149-kube-api-access-c9ghl\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.486044 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-config-data\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.486079 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-config-data\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.486111 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.486130 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.501716 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-credential-keys\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.502521 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-combined-ca-bundle\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.503321 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-scripts\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.505779 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-config-data\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.508003 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-fernet-keys\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.534752 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9ghl\" (UniqueName: \"kubernetes.io/projected/c63ad2af-4b3b-4aa5-a300-06aadeef8149-kube-api-access-c9ghl\") pod \"keystone-bootstrap-pchf4\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.544190 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6644fc979c-bjpxl"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.545556 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.581242 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589284 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrlj7\" (UniqueName: \"kubernetes.io/projected/efabc52d-6f3c-4442-9b80-09577d6d5ed7-kube-api-access-nrlj7\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589337 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-config-data\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589363 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589381 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589396 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589411 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-config-data\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589473 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96fe12c6-435c-4ef9-a340-c15cd050d898-logs\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589510 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv8fx\" (UniqueName: \"kubernetes.io/projected/96fe12c6-435c-4ef9-a340-c15cd050d898-kube-api-access-wv8fx\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589530 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.589564 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efabc52d-6f3c-4442-9b80-09577d6d5ed7-logs\") pod \"watcher-api-0\" 
(UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.606001 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96fe12c6-435c-4ef9-a340-c15cd050d898-logs\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.616413 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-config-data\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.617811 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.624869 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.632105 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.642502 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.657238 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6644fc979c-bjpxl"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693585 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-nb\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693622 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-swift-storage-0\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693663 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693698 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efabc52d-6f3c-4442-9b80-09577d6d5ed7-logs\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693721 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-svc\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693742 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z8bq\" (UniqueName: \"kubernetes.io/projected/9d8f2367-684b-453b-bd7a-4d93e021885c-kube-api-access-8z8bq\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693769 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrlj7\" (UniqueName: \"kubernetes.io/projected/efabc52d-6f3c-4442-9b80-09577d6d5ed7-kube-api-access-nrlj7\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693784 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-config\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693800 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-sb\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693829 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.693846 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-config-data\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.694766 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv8fx\" (UniqueName: \"kubernetes.io/projected/96fe12c6-435c-4ef9-a340-c15cd050d898-kube-api-access-wv8fx\") pod \"watcher-decision-engine-0\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.695366 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.695684 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efabc52d-6f3c-4442-9b80-09577d6d5ed7-logs\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.700658 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.709726 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.717644 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.718367 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-config-data\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.731097 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.741507 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67874d8bd5-ff7xc"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.743124 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.747953 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.748156 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.748333 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.771304 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-x77d8" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.773887 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kfzmp"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.775356 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.787549 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795063 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-svc\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795105 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z8bq\" (UniqueName: \"kubernetes.io/projected/9d8f2367-684b-453b-bd7a-4d93e021885c-kube-api-access-8z8bq\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795136 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795165 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-config\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795272 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-sb\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" 
Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795329 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-config-data\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795349 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd90883-79db-4903-87ab-828b9608f9fa-logs\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795384 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkk7w\" (UniqueName: \"kubernetes.io/projected/5bd90883-79db-4903-87ab-828b9608f9fa-kube-api-access-jkk7w\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795413 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-swift-storage-0\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.795431 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-nb\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.796385 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-nb\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.796940 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-svc\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.797718 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-config\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.798792 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrlj7\" (UniqueName: \"kubernetes.io/projected/efabc52d-6f3c-4442-9b80-09577d6d5ed7-kube-api-access-nrlj7\") pod \"watcher-api-0\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " pod="openstack/watcher-api-0" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.806479 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-swift-storage-0\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.814537 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-sb\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.817184 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67874d8bd5-ff7xc"] Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.836409 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.836548 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.836642 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rp826" Feb 18 19:53:26 crc kubenswrapper[4932]: I0218 19:53:26.866202 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z8bq\" (UniqueName: \"kubernetes.io/projected/9d8f2367-684b-453b-bd7a-4d93e021885c-kube-api-access-8z8bq\") pod \"dnsmasq-dns-6644fc979c-bjpxl\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.885478 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kfzmp"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898133 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-scripts\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898197 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-config-data\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898228 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898287 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfb47\" (UniqueName: \"kubernetes.io/projected/a620c48b-58fa-487f-8997-e2784ddc497b-kube-api-access-lfb47\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898320 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a620c48b-58fa-487f-8997-e2784ddc497b-horizon-secret-key\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898338 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a620c48b-58fa-487f-8997-e2784ddc497b-logs\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898363 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-config\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898397 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-combined-ca-bundle\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898438 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-config-data\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898458 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd90883-79db-4903-87ab-828b9608f9fa-logs\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898476 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpwbx\" (UniqueName: \"kubernetes.io/projected/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-kube-api-access-mpwbx\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.898514 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkk7w\" (UniqueName: \"kubernetes.io/projected/5bd90883-79db-4903-87ab-828b9608f9fa-kube-api-access-jkk7w\") pod \"watcher-applier-0\" 
(UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.904233 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd90883-79db-4903-87ab-828b9608f9fa-logs\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.907099 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-config-data\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.915863 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.929880 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkk7w\" (UniqueName: \"kubernetes.io/projected/5bd90883-79db-4903-87ab-828b9608f9fa-kube-api-access-jkk7w\") pod \"watcher-applier-0\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.964653 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nqxxn"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.965703 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.973337 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-sd6v9" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.974793 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.975001 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.989637 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b4cfbdb9c-hwmr5"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.991180 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.992672 4932 generic.go:334] "Generic (PLEG): container finished" podID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerID="ada62a006618a52de9cbdc7e1191675216899513cfdb323ef65c636601133e63" exitCode=0 Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:26.992745 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" event={"ID":"81c5b019-830a-45a5-b05e-22f7aa7e41c7","Type":"ContainerDied","Data":"ada62a006618a52de9cbdc7e1191675216899513cfdb323ef65c636601133e63"} Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001021 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-scripts\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001064 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-config-data\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001110 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfb47\" (UniqueName: \"kubernetes.io/projected/a620c48b-58fa-487f-8997-e2784ddc497b-kube-api-access-lfb47\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001131 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a620c48b-58fa-487f-8997-e2784ddc497b-horizon-secret-key\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001148 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a620c48b-58fa-487f-8997-e2784ddc497b-logs\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001184 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-config\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001213 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-combined-ca-bundle\") pod 
\"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.001251 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpwbx\" (UniqueName: \"kubernetes.io/projected/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-kube-api-access-mpwbx\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.002389 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a620c48b-58fa-487f-8997-e2784ddc497b-logs\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.003109 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-config-data\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.003543 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-scripts\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.010847 4932 generic.go:334] "Generic (PLEG): container finished" podID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerID="81f9d76b429826048a1f76e9841d9bd5c8224e1c54ca1834ee1d11eed8e3afa6" exitCode=0 Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.010883 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerDied","Data":"81f9d76b429826048a1f76e9841d9bd5c8224e1c54ca1834ee1d11eed8e3afa6"} Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.013525 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-combined-ca-bundle\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.016848 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a620c48b-58fa-487f-8997-e2784ddc497b-horizon-secret-key\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.026445 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-config\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.030364 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfb47\" (UniqueName: \"kubernetes.io/projected/a620c48b-58fa-487f-8997-e2784ddc497b-kube-api-access-lfb47\") pod \"horizon-67874d8bd5-ff7xc\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.034407 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b4cfbdb9c-hwmr5"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.049991 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpwbx\" (UniqueName: 
\"kubernetes.io/projected/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-kube-api-access-mpwbx\") pod \"neutron-db-sync-kfzmp\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.078492 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.079057 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nqxxn"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.106657 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-combined-ca-bundle\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.106817 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-config-data\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.106890 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-scripts\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107014 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rph76\" (UniqueName: \"kubernetes.io/projected/4938c577-60aa-45c3-9190-b6e82bcf8b0d-kube-api-access-rph76\") pod 
\"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107037 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqllv\" (UniqueName: \"kubernetes.io/projected/3f831817-b833-4ee3-b1e9-77d9c02416ed-kube-api-access-qqllv\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107099 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-db-sync-config-data\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107122 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-config-data\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107154 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4938c577-60aa-45c3-9190-b6e82bcf8b0d-horizon-secret-key\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107232 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f831817-b833-4ee3-b1e9-77d9c02416ed-etc-machine-id\") 
pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107388 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4938c577-60aa-45c3-9190-b6e82bcf8b0d-logs\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.107570 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-scripts\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.151586 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-df7zx"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.152718 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.156586 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-s8zmw" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.156766 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.156924 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.179649 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-cpzcj"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.180915 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.210874 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rph76\" (UniqueName: \"kubernetes.io/projected/4938c577-60aa-45c3-9190-b6e82bcf8b0d-kube-api-access-rph76\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.210908 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqllv\" (UniqueName: \"kubernetes.io/projected/3f831817-b833-4ee3-b1e9-77d9c02416ed-kube-api-access-qqllv\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.210945 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-db-sync-config-data\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.210977 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-config-data\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.210998 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4938c577-60aa-45c3-9190-b6e82bcf8b0d-horizon-secret-key\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 
19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.211021 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f831817-b833-4ee3-b1e9-77d9c02416ed-etc-machine-id\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.211095 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4938c577-60aa-45c3-9190-b6e82bcf8b0d-logs\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.211111 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-scripts\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.211234 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-combined-ca-bundle\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.211264 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-config-data\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.211285 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-scripts\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.212081 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-scripts\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.217549 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-config-data\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.218018 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f831817-b833-4ee3-b1e9-77d9c02416ed-etc-machine-id\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.218620 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.218714 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4938c577-60aa-45c3-9190-b6e82bcf8b0d-logs\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.219250 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sxgcc" Feb 18 
19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.252046 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4938c577-60aa-45c3-9190-b6e82bcf8b0d-horizon-secret-key\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.278996 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-scripts\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.280023 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-db-sync-config-data\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.280821 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-combined-ca-bundle\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.286020 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-config-data\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.301523 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rph76\" (UniqueName: 
\"kubernetes.io/projected/4938c577-60aa-45c3-9190-b6e82bcf8b0d-kube-api-access-rph76\") pod \"horizon-5b4cfbdb9c-hwmr5\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314430 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-db-sync-config-data\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314488 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-combined-ca-bundle\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314514 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zn59\" (UniqueName: \"kubernetes.io/projected/43f771cb-173f-4939-b1d1-e7d1b21834cb-kube-api-access-4zn59\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314576 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-scripts\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314638 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-combined-ca-bundle\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314691 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30efc86e-0c26-42e4-b907-1d4d985912ed-logs\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314735 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-config-data\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.314882 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l4dr\" (UniqueName: \"kubernetes.io/projected/30efc86e-0c26-42e4-b907-1d4d985912ed-kube-api-access-5l4dr\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.330456 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqllv\" (UniqueName: \"kubernetes.io/projected/3f831817-b833-4ee3-b1e9-77d9c02416ed-kube-api-access-qqllv\") pod \"cinder-db-sync-nqxxn\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.373221 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6644fc979c-bjpxl"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 
19:53:27.373658 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-df7zx"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.373700 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cpzcj"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.373762 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b855db8f7-mh8jh"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.385770 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b855db8f7-mh8jh"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.385867 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417044 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-nb\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417081 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417092 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-scripts\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417777 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-config\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417860 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-combined-ca-bundle\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417954 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-swift-storage-0\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.417991 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30efc86e-0c26-42e4-b907-1d4d985912ed-logs\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 
19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418029 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d668s\" (UniqueName: \"kubernetes.io/projected/0affb7f8-ebd4-4d8d-b41c-dd968316038d-kube-api-access-d668s\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418067 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-config-data\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418124 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l4dr\" (UniqueName: \"kubernetes.io/projected/30efc86e-0c26-42e4-b907-1d4d985912ed-kube-api-access-5l4dr\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418160 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-sb\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418281 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-db-sync-config-data\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc 
kubenswrapper[4932]: I0218 19:53:27.418342 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-combined-ca-bundle\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418386 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zn59\" (UniqueName: \"kubernetes.io/projected/43f771cb-173f-4939-b1d1-e7d1b21834cb-kube-api-access-4zn59\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418413 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-svc\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.418942 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30efc86e-0c26-42e4-b907-1d4d985912ed-logs\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.420031 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-scripts\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.422462 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-combined-ca-bundle\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.424372 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-config-data\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.428258 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.429932 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.439398 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.441552 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.445396 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-db-sync-config-data\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.446062 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-combined-ca-bundle\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.446245 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.446681 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.446873 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.447096 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mx5f7" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.447480 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.448074 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.449057 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.454504 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l4dr\" (UniqueName: \"kubernetes.io/projected/30efc86e-0c26-42e4-b907-1d4d985912ed-kube-api-access-5l4dr\") pod \"placement-db-sync-df7zx\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.463010 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.467588 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zn59\" (UniqueName: \"kubernetes.io/projected/43f771cb-173f-4939-b1d1-e7d1b21834cb-kube-api-access-4zn59\") pod \"barbican-db-sync-cpzcj\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") " pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.477584 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.486236 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.497431 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.522529 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-nb\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.522643 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-scripts\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.522709 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.522877 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-config\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.522990 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z4dm\" (UniqueName: \"kubernetes.io/projected/a956ae21-8721-4f0a-815f-acb82958ec28-kube-api-access-6z4dm\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523080 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-swift-storage-0\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523147 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-config-data\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523256 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d668s\" (UniqueName: \"kubernetes.io/projected/0affb7f8-ebd4-4d8d-b41c-dd968316038d-kube-api-access-d668s\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523336 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-logs\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523414 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 
19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523493 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523555 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-sb\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523647 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-scripts\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523738 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgvx4\" (UniqueName: \"kubernetes.io/projected/079e3d7d-bd4f-4198-8606-95192a514c07-kube-api-access-xgvx4\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523824 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 
19:53:27.523897 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-run-httpd\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.523958 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-config-data\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.524060 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-log-httpd\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.524150 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-svc\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.524228 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.524363 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.527269 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-nb\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.528890 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-swift-storage-0\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.528954 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-config\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.529614 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-sb\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.534190 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-svc\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.546032 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d668s\" (UniqueName: \"kubernetes.io/projected/0affb7f8-ebd4-4d8d-b41c-dd968316038d-kube-api-access-d668s\") pod \"dnsmasq-dns-7b855db8f7-mh8jh\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") " pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.627971 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-run-httpd\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628013 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-config-data\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628030 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-log-httpd\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628052 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628088 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628111 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-scripts\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628129 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628193 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z4dm\" (UniqueName: \"kubernetes.io/projected/a956ae21-8721-4f0a-815f-acb82958ec28-kube-api-access-6z4dm\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628220 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-config-data\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: 
I0218 19:53:27.628249 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-logs\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628269 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628291 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628318 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-scripts\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628336 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgvx4\" (UniqueName: \"kubernetes.io/projected/079e3d7d-bd4f-4198-8606-95192a514c07-kube-api-access-xgvx4\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.628364 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.629800 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-run-httpd\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.630280 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.630757 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.631355 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-log-httpd\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.633577 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-logs\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.654294 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.656311 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-config-data\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.674875 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-scripts\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.686035 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.686450 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-config-data\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.686481 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.686552 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.687622 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-scripts\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.690728 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgvx4\" (UniqueName: \"kubernetes.io/projected/079e3d7d-bd4f-4198-8606-95192a514c07-kube-api-access-xgvx4\") pod \"ceilometer-0\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " pod="openstack/ceilometer-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.708112 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z4dm\" (UniqueName: \"kubernetes.io/projected/a956ae21-8721-4f0a-815f-acb82958ec28-kube-api-access-6z4dm\") pod \"glance-default-external-api-0\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.713733 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: 
\"a956ae21-8721-4f0a-815f-acb82958ec28\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: W0218 19:53:27.785995 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc63ad2af_4b3b_4aa5_a300_06aadeef8149.slice/crio-cac15c7f14ab1bd067bcfa006814fe217ca7694f3478219da3114178dc6d8dae WatchSource:0}: Error finding container cac15c7f14ab1bd067bcfa006814fe217ca7694f3478219da3114178dc6d8dae: Status 404 returned error can't find the container with id cac15c7f14ab1bd067bcfa006814fe217ca7694f3478219da3114178dc6d8dae Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.817666 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pchf4"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.836953 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.837939 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.876230 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.878641 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.880333 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.903517 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.904985 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.905357 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.930120 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cpzcj" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.933162 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954390 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954529 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954640 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954660 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954720 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d7ll\" (UniqueName: \"kubernetes.io/projected/4ef7f755-fa76-4e5c-8689-06727a6a9204-kube-api-access-6d7ll\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954740 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954779 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.954796 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.967319 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-df7zx" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.977046 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:27 crc kubenswrapper[4932]: I0218 19:53:27.989659 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:27.991840 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.020352 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.046165 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"efabc52d-6f3c-4442-9b80-09577d6d5ed7","Type":"ContainerStarted","Data":"2d6fd36cf5810909c88050cc20c15f847b1b0069bc0b2e13fc22cf63d5c5c033"} Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.052974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" event={"ID":"81c5b019-830a-45a5-b05e-22f7aa7e41c7","Type":"ContainerDied","Data":"40635d0f4580a3bc434b2fd370ad4b541db323a92086fd21f44bf21127a2ea88"} Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.053022 4932 scope.go:117] "RemoveContainer" containerID="ada62a006618a52de9cbdc7e1191675216899513cfdb323ef65c636601133e63" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.053129 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.055499 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6644fc979c-bjpxl"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.059075 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"96fe12c6-435c-4ef9-a340-c15cd050d898","Type":"ContainerStarted","Data":"50a26765d82393ad4f763879251ca7f0c251c1c50f74af99544b44224a950233"} Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.064896 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pchf4" event={"ID":"c63ad2af-4b3b-4aa5-a300-06aadeef8149","Type":"ContainerStarted","Data":"cac15c7f14ab1bd067bcfa006814fe217ca7694f3478219da3114178dc6d8dae"} Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.064913 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065050 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065250 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065281 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065462 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d7ll\" (UniqueName: \"kubernetes.io/projected/4ef7f755-fa76-4e5c-8689-06727a6a9204-kube-api-access-6d7ll\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065490 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065639 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.065688 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.068452 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.071063 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.073442 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.074093 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.090654 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.090714 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.100406 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.105485 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerStarted","Data":"361657e74a3f41f1c11b35878117fcf352b08b255d1c2d6041c3ed746c1fd2c2"} Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.106551 
4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d7ll\" (UniqueName: \"kubernetes.io/projected/4ef7f755-fa76-4e5c-8689-06727a6a9204-kube-api-access-6d7ll\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.150436 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.166369 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-swift-storage-0\") pod \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.166409 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-nb\") pod \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.166449 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-sb\") pod \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.166589 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-config\") pod \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.166648 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kb4v\" (UniqueName: \"kubernetes.io/projected/81c5b019-830a-45a5-b05e-22f7aa7e41c7-kube-api-access-6kb4v\") pod \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.166715 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-svc\") pod \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\" (UID: \"81c5b019-830a-45a5-b05e-22f7aa7e41c7\") " Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.216085 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.251877 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81c5b019-830a-45a5-b05e-22f7aa7e41c7-kube-api-access-6kb4v" (OuterVolumeSpecName: "kube-api-access-6kb4v") pod "81c5b019-830a-45a5-b05e-22f7aa7e41c7" (UID: "81c5b019-830a-45a5-b05e-22f7aa7e41c7"). InnerVolumeSpecName "kube-api-access-6kb4v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.270240 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kb4v\" (UniqueName: \"kubernetes.io/projected/81c5b019-830a-45a5-b05e-22f7aa7e41c7-kube-api-access-6kb4v\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.271382 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kfzmp"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.290962 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67874d8bd5-ff7xc"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.333156 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.354043 4932 scope.go:117] "RemoveContainer" containerID="dd9cde3170c69a353ec61b65c4f18cbd1534c0a768ecc45c08a9e745323cd132" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.512597 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nqxxn"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.690060 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-df7zx"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.698540 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b4cfbdb9c-hwmr5"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.848869 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cpzcj"] Feb 18 19:53:28 crc kubenswrapper[4932]: W0218 19:53:28.877439 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30efc86e_0c26_42e4_b907_1d4d985912ed.slice/crio-7657bac55ccacde4594557141b7b117e70c960cb0019ec4ad053450683538da6 WatchSource:0}: Error finding 
container 7657bac55ccacde4594557141b7b117e70c960cb0019ec4ad053450683538da6: Status 404 returned error can't find the container with id 7657bac55ccacde4594557141b7b117e70c960cb0019ec4ad053450683538da6 Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.897071 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81c5b019-830a-45a5-b05e-22f7aa7e41c7" (UID: "81c5b019-830a-45a5-b05e-22f7aa7e41c7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.906590 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81c5b019-830a-45a5-b05e-22f7aa7e41c7" (UID: "81c5b019-830a-45a5-b05e-22f7aa7e41c7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.929796 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81c5b019-830a-45a5-b05e-22f7aa7e41c7" (UID: "81c5b019-830a-45a5-b05e-22f7aa7e41c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.933079 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-config" (OuterVolumeSpecName: "config") pod "81c5b019-830a-45a5-b05e-22f7aa7e41c7" (UID: "81c5b019-830a-45a5-b05e-22f7aa7e41c7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.938936 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "81c5b019-830a-45a5-b05e-22f7aa7e41c7" (UID: "81c5b019-830a-45a5-b05e-22f7aa7e41c7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.976564 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b855db8f7-mh8jh"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.989025 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.992594 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.992617 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.992627 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.992638 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:28 crc kubenswrapper[4932]: I0218 19:53:28.992646 4932 reconciler_common.go:293] "Volume detached for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81c5b019-830a-45a5-b05e-22f7aa7e41c7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.064385 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8f475786f-6jkn9"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.071949 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8f475786f-6jkn9"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.079008 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.219924 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" path="/var/lib/kubelet/pods/81c5b019-830a-45a5-b05e-22f7aa7e41c7/volumes" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.220497 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kfzmp" event={"ID":"c4c20fc2-cf78-41c9-9e37-c5bea35d472f","Type":"ContainerStarted","Data":"d6c505a399db7407167ba85b30249143bd9bde443aac40b322a8f403af6c7869"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.220522 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.241986 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5bd90883-79db-4903-87ab-828b9608f9fa","Type":"ContainerStarted","Data":"df7e1feb306b3e43a9f10b16516d4c855aa78c2e70283552aa8d3546e3dee111"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.243529 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pchf4" event={"ID":"c63ad2af-4b3b-4aa5-a300-06aadeef8149","Type":"ContainerStarted","Data":"502e6556feede81a431352bd255101dc0919dfeb0d3696054c3aff0523a4cd61"} Feb 18 19:53:29 crc kubenswrapper[4932]: 
I0218 19:53:29.246403 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.252914 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" event={"ID":"9d8f2367-684b-453b-bd7a-4d93e021885c","Type":"ContainerStarted","Data":"e47c7606f816972b032cc244cce055d96313af205e2299f5ab36bbb071939e87"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.285433 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4cfbdb9c-hwmr5" event={"ID":"4938c577-60aa-45c3-9190-b6e82bcf8b0d","Type":"ContainerStarted","Data":"449b65cc6eee0acc18bb77293bfac087ad9d12fb9f06318dfdbe198587c35eda"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.298773 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67874d8bd5-ff7xc" event={"ID":"a620c48b-58fa-487f-8997-e2784ddc497b","Type":"ContainerStarted","Data":"3db1ad470af452257972c4a5c8d1fb2ee8875e24f72fe068e89046c3a5a557ce"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.328280 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pchf4" podStartSLOduration=3.328256224 podStartE2EDuration="3.328256224s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:29.270995114 +0000 UTC m=+1172.852949959" watchObservedRunningTime="2026-02-18 19:53:29.328256224 +0000 UTC m=+1172.910211069" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.330419 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b4cfbdb9c-hwmr5"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.348304 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nqxxn" 
event={"ID":"3f831817-b833-4ee3-b1e9-77d9c02416ed","Type":"ContainerStarted","Data":"e5dc27f7492f1faa0455250ffd7868de8258df87b7d776e52911e76784a162ec"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.356598 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-df7zx" event={"ID":"30efc86e-0c26-42e4-b907-1d4d985912ed","Type":"ContainerStarted","Data":"7657bac55ccacde4594557141b7b117e70c960cb0019ec4ad053450683538da6"} Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.362481 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.373355 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-644d9bbcf7-chs9h"] Feb 18 19:53:29 crc kubenswrapper[4932]: E0218 19:53:29.374755 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="dnsmasq-dns" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.374780 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="dnsmasq-dns" Feb 18 19:53:29 crc kubenswrapper[4932]: E0218 19:53:29.374819 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="init" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.374828 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="init" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.375960 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="dnsmasq-dns" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.378366 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.398666 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.443698 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-644d9bbcf7-chs9h"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.487884 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.505418 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62ssm\" (UniqueName: \"kubernetes.io/projected/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-kube-api-access-62ssm\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.505518 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-logs\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.505549 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-config-data\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.505566 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-scripts\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.505603 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-horizon-secret-key\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.607426 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62ssm\" (UniqueName: \"kubernetes.io/projected/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-kube-api-access-62ssm\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.607537 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-logs\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.607566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-config-data\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.607584 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-scripts\") pod 
\"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.607624 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-horizon-secret-key\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.611523 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-logs\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.614252 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-config-data\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.614746 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-scripts\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.615513 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-horizon-secret-key\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc 
kubenswrapper[4932]: I0218 19:53:29.643161 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62ssm\" (UniqueName: \"kubernetes.io/projected/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-kube-api-access-62ssm\") pod \"horizon-644d9bbcf7-chs9h\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:29 crc kubenswrapper[4932]: I0218 19:53:29.742345 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:53:30 crc kubenswrapper[4932]: W0218 19:53:30.088340 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43f771cb_173f_4939_b1d1_e7d1b21834cb.slice/crio-5c69e97847efcde57a769daf96ea0750cda2a27a34d2c7d54166590315ebcbc1 WatchSource:0}: Error finding container 5c69e97847efcde57a769daf96ea0750cda2a27a34d2c7d54166590315ebcbc1: Status 404 returned error can't find the container with id 5c69e97847efcde57a769daf96ea0750cda2a27a34d2c7d54166590315ebcbc1 Feb 18 19:53:30 crc kubenswrapper[4932]: W0218 19:53:30.106716 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda956ae21_8721_4f0a_815f_acb82958ec28.slice/crio-4b701277c70d3fe2d0fb203a8c7965ad1fb9d840ac2b0ced500b37db0043c874 WatchSource:0}: Error finding container 4b701277c70d3fe2d0fb203a8c7965ad1fb9d840ac2b0ced500b37db0043c874: Status 404 returned error can't find the container with id 4b701277c70d3fe2d0fb203a8c7965ad1fb9d840ac2b0ced500b37db0043c874 Feb 18 19:53:30 crc kubenswrapper[4932]: W0218 19:53:30.128287 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod079e3d7d_bd4f_4198_8606_95192a514c07.slice/crio-fee032ffa8aa1dbfcab87d2f666d06dce9f00f11a46c1ed8dccaedd7a3ae0ea4 WatchSource:0}: Error finding container 
fee032ffa8aa1dbfcab87d2f666d06dce9f00f11a46c1ed8dccaedd7a3ae0ea4: Status 404 returned error can't find the container with id fee032ffa8aa1dbfcab87d2f666d06dce9f00f11a46c1ed8dccaedd7a3ae0ea4 Feb 18 19:53:30 crc kubenswrapper[4932]: I0218 19:53:30.370191 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerStarted","Data":"fee032ffa8aa1dbfcab87d2f666d06dce9f00f11a46c1ed8dccaedd7a3ae0ea4"} Feb 18 19:53:30 crc kubenswrapper[4932]: I0218 19:53:30.372563 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cpzcj" event={"ID":"43f771cb-173f-4939-b1d1-e7d1b21834cb","Type":"ContainerStarted","Data":"5c69e97847efcde57a769daf96ea0750cda2a27a34d2c7d54166590315ebcbc1"} Feb 18 19:53:30 crc kubenswrapper[4932]: I0218 19:53:30.374702 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a956ae21-8721-4f0a-815f-acb82958ec28","Type":"ContainerStarted","Data":"4b701277c70d3fe2d0fb203a8c7965ad1fb9d840ac2b0ced500b37db0043c874"} Feb 18 19:53:30 crc kubenswrapper[4932]: I0218 19:53:30.377037 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ef7f755-fa76-4e5c-8689-06727a6a9204","Type":"ContainerStarted","Data":"7726c4f68af3477b632315682a36e3711c9d3bff8965ae81fe2c0dd5455b7980"} Feb 18 19:53:30 crc kubenswrapper[4932]: I0218 19:53:30.380132 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" event={"ID":"0affb7f8-ebd4-4d8d-b41c-dd968316038d","Type":"ContainerStarted","Data":"58e783f05bfc925c4081556f019c7c54bdb33f3d7590e9cb651eb5ff2a823274"} Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.392667 4932 generic.go:334] "Generic (PLEG): container finished" podID="9d8f2367-684b-453b-bd7a-4d93e021885c" containerID="73b2fccbbe9db45c39c7f1f9fbfc786fb36384c6ef170b3e0e90d0b3358912a1" 
exitCode=0 Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.392917 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" event={"ID":"9d8f2367-684b-453b-bd7a-4d93e021885c","Type":"ContainerDied","Data":"73b2fccbbe9db45c39c7f1f9fbfc786fb36384c6ef170b3e0e90d0b3358912a1"} Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.404916 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerStarted","Data":"d0b5bb5f9b3d94768e061de73d45369ab8df4d6880aaa6f295ec1ea349cbcc2b"} Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.416266 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"efabc52d-6f3c-4442-9b80-09577d6d5ed7","Type":"ContainerStarted","Data":"21f540805a94ed439a7fc5568d03546bf5918b51410b050c0717633a77e5be9d"} Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.418110 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kfzmp" event={"ID":"c4c20fc2-cf78-41c9-9e37-c5bea35d472f","Type":"ContainerStarted","Data":"682f69e31fcb10c9b585e4fbecb1e2d4f8e82e3ec0c03204e9e0fefc1d901753"} Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.438728 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kfzmp" podStartSLOduration=5.438709782 podStartE2EDuration="5.438709782s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:31.433377741 +0000 UTC m=+1175.015332586" watchObservedRunningTime="2026-02-18 19:53:31.438709782 +0000 UTC m=+1175.020664627" Feb 18 19:53:31 crc kubenswrapper[4932]: I0218 19:53:31.951060 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.070607 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-swift-storage-0\") pod \"9d8f2367-684b-453b-bd7a-4d93e021885c\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.070724 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-svc\") pod \"9d8f2367-684b-453b-bd7a-4d93e021885c\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.070746 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-config\") pod \"9d8f2367-684b-453b-bd7a-4d93e021885c\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.070822 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-nb\") pod \"9d8f2367-684b-453b-bd7a-4d93e021885c\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.071082 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z8bq\" (UniqueName: \"kubernetes.io/projected/9d8f2367-684b-453b-bd7a-4d93e021885c-kube-api-access-8z8bq\") pod \"9d8f2367-684b-453b-bd7a-4d93e021885c\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.071129 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-sb\") pod \"9d8f2367-684b-453b-bd7a-4d93e021885c\" (UID: \"9d8f2367-684b-453b-bd7a-4d93e021885c\") " Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.109874 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8f2367-684b-453b-bd7a-4d93e021885c-kube-api-access-8z8bq" (OuterVolumeSpecName: "kube-api-access-8z8bq") pod "9d8f2367-684b-453b-bd7a-4d93e021885c" (UID: "9d8f2367-684b-453b-bd7a-4d93e021885c"). InnerVolumeSpecName "kube-api-access-8z8bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.127895 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d8f2367-684b-453b-bd7a-4d93e021885c" (UID: "9d8f2367-684b-453b-bd7a-4d93e021885c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.143525 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d8f2367-684b-453b-bd7a-4d93e021885c" (UID: "9d8f2367-684b-453b-bd7a-4d93e021885c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.145961 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d8f2367-684b-453b-bd7a-4d93e021885c" (UID: "9d8f2367-684b-453b-bd7a-4d93e021885c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.146998 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9d8f2367-684b-453b-bd7a-4d93e021885c" (UID: "9d8f2367-684b-453b-bd7a-4d93e021885c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.151527 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-config" (OuterVolumeSpecName: "config") pod "9d8f2367-684b-453b-bd7a-4d93e021885c" (UID: "9d8f2367-684b-453b-bd7a-4d93e021885c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.190436 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z8bq\" (UniqueName: \"kubernetes.io/projected/9d8f2367-684b-453b-bd7a-4d93e021885c-kube-api-access-8z8bq\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.190467 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.190476 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.190487 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-dns-svc\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.190496 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.190505 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8f2367-684b-453b-bd7a-4d93e021885c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.432833 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" event={"ID":"9d8f2367-684b-453b-bd7a-4d93e021885c","Type":"ContainerDied","Data":"e47c7606f816972b032cc244cce055d96313af205e2299f5ab36bbb071939e87"} Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.432884 4932 scope.go:117] "RemoveContainer" containerID="73b2fccbbe9db45c39c7f1f9fbfc786fb36384c6ef170b3e0e90d0b3358912a1" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.432844 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6644fc979c-bjpxl" Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.475082 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-644d9bbcf7-chs9h"] Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.496225 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6644fc979c-bjpxl"] Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.503507 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6644fc979c-bjpxl"] Feb 18 19:53:32 crc kubenswrapper[4932]: I0218 19:53:32.818299 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8f475786f-6jkn9" podUID="81c5b019-830a-45a5-b05e-22f7aa7e41c7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Feb 18 19:53:33 crc kubenswrapper[4932]: I0218 19:53:33.194853 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8f2367-684b-453b-bd7a-4d93e021885c" path="/var/lib/kubelet/pods/9d8f2367-684b-453b-bd7a-4d93e021885c/volumes" Feb 18 19:53:35 crc kubenswrapper[4932]: I0218 19:53:35.463308 4932 generic.go:334] "Generic (PLEG): container finished" podID="c63ad2af-4b3b-4aa5-a300-06aadeef8149" containerID="502e6556feede81a431352bd255101dc0919dfeb0d3696054c3aff0523a4cd61" exitCode=0 Feb 18 19:53:35 crc kubenswrapper[4932]: I0218 19:53:35.463405 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pchf4" event={"ID":"c63ad2af-4b3b-4aa5-a300-06aadeef8149","Type":"ContainerDied","Data":"502e6556feede81a431352bd255101dc0919dfeb0d3696054c3aff0523a4cd61"} Feb 18 19:53:35 crc kubenswrapper[4932]: I0218 19:53:35.465388 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-644d9bbcf7-chs9h" event={"ID":"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf","Type":"ContainerStarted","Data":"64958cb64aa641fc969187f742c63571ece0fcc99f90f916c984ba259dcd59e7"} Feb 18 
19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.366098 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67874d8bd5-ff7xc"] Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.414063 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75df984768-5mv9k"] Feb 18 19:53:36 crc kubenswrapper[4932]: E0218 19:53:36.414567 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8f2367-684b-453b-bd7a-4d93e021885c" containerName="init" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.414580 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8f2367-684b-453b-bd7a-4d93e021885c" containerName="init" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.414765 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d8f2367-684b-453b-bd7a-4d93e021885c" containerName="init" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.417302 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.421538 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.430380 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75df984768-5mv9k"] Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.467741 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-644d9bbcf7-chs9h"] Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.486842 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6877c868f8-jvwwn"] Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.488328 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.521909 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6877c868f8-jvwwn"] Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589546 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-scripts\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589592 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-horizon-tls-certs\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589614 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec0e208-2bfc-4661-8395-c56418bb9307-logs\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589651 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-logs\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589849 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-config-data\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589890 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-tls-certs\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589955 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-config-data\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.589972 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-scripts\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.590113 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v96cw\" (UniqueName: \"kubernetes.io/projected/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-kube-api-access-v96cw\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.590198 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-combined-ca-bundle\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.590235 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-secret-key\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.590254 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-combined-ca-bundle\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.590276 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-horizon-secret-key\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.590292 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h566q\" (UniqueName: \"kubernetes.io/projected/dec0e208-2bfc-4661-8395-c56418bb9307-kube-api-access-h566q\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.692741 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-scripts\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.692807 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-horizon-tls-certs\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.692829 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec0e208-2bfc-4661-8395-c56418bb9307-logs\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693021 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-logs\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693377 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-logs\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693468 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-config-data\") pod \"horizon-6877c868f8-jvwwn\" (UID: 
\"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693489 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-tls-certs\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693755 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-config-data\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693770 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-scripts\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693801 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v96cw\" (UniqueName: \"kubernetes.io/projected/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-kube-api-access-v96cw\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.693830 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-combined-ca-bundle\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 
19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.696261 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-secret-key\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.696296 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-combined-ca-bundle\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.696332 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h566q\" (UniqueName: \"kubernetes.io/projected/dec0e208-2bfc-4661-8395-c56418bb9307-kube-api-access-h566q\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.696348 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-horizon-secret-key\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.695712 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-config-data\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.694289 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec0e208-2bfc-4661-8395-c56418bb9307-logs\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.694703 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-config-data\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.695091 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-scripts\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.694418 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-scripts\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.701104 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-horizon-secret-key\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.702404 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-secret-key\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.702503 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-tls-certs\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.705453 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-combined-ca-bundle\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.708887 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-combined-ca-bundle\") pod \"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.710757 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v96cw\" (UniqueName: \"kubernetes.io/projected/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-kube-api-access-v96cw\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.721772 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h566q\" (UniqueName: \"kubernetes.io/projected/dec0e208-2bfc-4661-8395-c56418bb9307-kube-api-access-h566q\") pod 
\"horizon-75df984768-5mv9k\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.736004 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/90dd0ecb-25a6-463a-a0d8-187c5c5478c5-horizon-tls-certs\") pod \"horizon-6877c868f8-jvwwn\" (UID: \"90dd0ecb-25a6-463a-a0d8-187c5c5478c5\") " pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.758252 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:53:36 crc kubenswrapper[4932]: I0218 19:53:36.809888 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:53:37 crc kubenswrapper[4932]: I0218 19:53:37.307500 4932 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podf7988cea-6aa8-4552-8965-04b417c91831"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf7988cea-6aa8-4552-8965-04b417c91831] : Timed out while waiting for systemd to remove kubepods-besteffort-podf7988cea_6aa8_4552_8965_04b417c91831.slice" Feb 18 19:53:37 crc kubenswrapper[4932]: E0218 19:53:37.307750 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podf7988cea-6aa8-4552-8965-04b417c91831] : unable to destroy cgroup paths for cgroup [kubepods besteffort podf7988cea-6aa8-4552-8965-04b417c91831] : Timed out while waiting for systemd to remove kubepods-besteffort-podf7988cea_6aa8_4552_8965_04b417c91831.slice" pod="openstack/cinder-db-create-hn6qq" podUID="f7988cea-6aa8-4552-8965-04b417c91831" Feb 18 19:53:37 crc kubenswrapper[4932]: I0218 19:53:37.507532 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hn6qq" Feb 18 19:53:44 crc kubenswrapper[4932]: I0218 19:53:44.572513 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ef7f755-fa76-4e5c-8689-06727a6a9204","Type":"ContainerStarted","Data":"e42b6cf62ee0a84f0660d6bd0e0803f31a5ed60ee2064a6fb1ff3db60b38d545"} Feb 18 19:53:44 crc kubenswrapper[4932]: I0218 19:53:44.575354 4932 generic.go:334] "Generic (PLEG): container finished" podID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerID="6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4" exitCode=0 Feb 18 19:53:44 crc kubenswrapper[4932]: I0218 19:53:44.575414 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" event={"ID":"0affb7f8-ebd4-4d8d-b41c-dd968316038d","Type":"ContainerDied","Data":"6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4"} Feb 18 19:53:44 crc kubenswrapper[4932]: I0218 19:53:44.579275 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerStarted","Data":"87593181676f68ce6f705683e7d0d7ac8f773d82d9f3858c223d1a3115fbc1c5"} Feb 18 19:53:44 crc kubenswrapper[4932]: I0218 19:53:44.636407 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=33.636388244 podStartE2EDuration="33.636388244s" podCreationTimestamp="2026-02-18 19:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:44.631684008 +0000 UTC m=+1188.213638863" watchObservedRunningTime="2026-02-18 19:53:44.636388244 +0000 UTC m=+1188.218343099" Feb 18 19:53:47 crc kubenswrapper[4932]: I0218 19:53:47.141576 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.166896 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.211666 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-config-data\") pod \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.211717 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-scripts\") pod \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.211743 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9ghl\" (UniqueName: \"kubernetes.io/projected/c63ad2af-4b3b-4aa5-a300-06aadeef8149-kube-api-access-c9ghl\") pod \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.211786 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-fernet-keys\") pod \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.211923 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-combined-ca-bundle\") pod \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " Feb 18 19:53:52 
crc kubenswrapper[4932]: I0218 19:53:52.212047 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-credential-keys\") pod \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\" (UID: \"c63ad2af-4b3b-4aa5-a300-06aadeef8149\") " Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.216735 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63ad2af-4b3b-4aa5-a300-06aadeef8149-kube-api-access-c9ghl" (OuterVolumeSpecName: "kube-api-access-c9ghl") pod "c63ad2af-4b3b-4aa5-a300-06aadeef8149" (UID: "c63ad2af-4b3b-4aa5-a300-06aadeef8149"). InnerVolumeSpecName "kube-api-access-c9ghl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.222291 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c63ad2af-4b3b-4aa5-a300-06aadeef8149" (UID: "c63ad2af-4b3b-4aa5-a300-06aadeef8149"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.222383 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c63ad2af-4b3b-4aa5-a300-06aadeef8149" (UID: "c63ad2af-4b3b-4aa5-a300-06aadeef8149"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.222409 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-scripts" (OuterVolumeSpecName: "scripts") pod "c63ad2af-4b3b-4aa5-a300-06aadeef8149" (UID: "c63ad2af-4b3b-4aa5-a300-06aadeef8149"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.234228 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c63ad2af-4b3b-4aa5-a300-06aadeef8149" (UID: "c63ad2af-4b3b-4aa5-a300-06aadeef8149"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.240371 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-config-data" (OuterVolumeSpecName: "config-data") pod "c63ad2af-4b3b-4aa5-a300-06aadeef8149" (UID: "c63ad2af-4b3b-4aa5-a300-06aadeef8149"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.314449 4932 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.314490 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.314502 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9ghl\" (UniqueName: \"kubernetes.io/projected/c63ad2af-4b3b-4aa5-a300-06aadeef8149-kube-api-access-c9ghl\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.314517 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.314528 4932 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.314569 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63ad2af-4b3b-4aa5-a300-06aadeef8149-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:52 crc kubenswrapper[4932]: E0218 19:53:52.462502 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Feb 18 19:53:52 crc kubenswrapper[4932]: E0218 19:53:52.463129 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Feb 18 19:53:52 crc kubenswrapper[4932]: E0218 19:53:52.463522 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.58:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n649h66fh65ch5cdh588h75h5fch669hffh577h56ch57dh55h5f9h5ddhb6h577h555h88h594h58fh598h87h564h64dh5c8h5c4h55fh64h557h54h575q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgvx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(079e3d7d-bd4f-4198-8606-95192a514c07): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.651133 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pchf4" event={"ID":"c63ad2af-4b3b-4aa5-a300-06aadeef8149","Type":"ContainerDied","Data":"cac15c7f14ab1bd067bcfa006814fe217ca7694f3478219da3114178dc6d8dae"} Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.651196 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cac15c7f14ab1bd067bcfa006814fe217ca7694f3478219da3114178dc6d8dae" Feb 18 19:53:52 crc kubenswrapper[4932]: I0218 19:53:52.651239 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pchf4" Feb 18 19:53:53 crc kubenswrapper[4932]: E0218 19:53:53.108033 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 18 19:53:53 crc kubenswrapper[4932]: E0218 19:53:53.108074 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 18 19:53:53 crc kubenswrapper[4932]: E0218 19:53:53.108182 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.58:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zn59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions
:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-cpzcj_openstack(43f771cb-173f-4939-b1d1-e7d1b21834cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:53 crc kubenswrapper[4932]: E0218 19:53:53.109429 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-cpzcj" podUID="43f771cb-173f-4939-b1d1-e7d1b21834cb" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.259854 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pchf4"] Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.270694 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pchf4"] Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.366468 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vldrp"] Feb 18 19:53:53 crc kubenswrapper[4932]: E0218 19:53:53.366988 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63ad2af-4b3b-4aa5-a300-06aadeef8149" containerName="keystone-bootstrap" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.367009 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63ad2af-4b3b-4aa5-a300-06aadeef8149" containerName="keystone-bootstrap" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.367160 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63ad2af-4b3b-4aa5-a300-06aadeef8149" 
containerName="keystone-bootstrap" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.367776 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.375875 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sk7x7" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.376484 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.377066 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.378016 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.378433 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.410232 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vldrp"] Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.437559 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-combined-ca-bundle\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.437634 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k42qv\" (UniqueName: \"kubernetes.io/projected/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-kube-api-access-k42qv\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " 
pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.437784 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-config-data\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.437885 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-scripts\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.437939 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-fernet-keys\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.437967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-credential-keys\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.539563 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-scripts\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc 
kubenswrapper[4932]: I0218 19:53:53.539637 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-fernet-keys\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.539662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-credential-keys\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.539699 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-combined-ca-bundle\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.539722 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k42qv\" (UniqueName: \"kubernetes.io/projected/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-kube-api-access-k42qv\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.539768 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-config-data\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.547742 4932 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-fernet-keys\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.548069 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-credential-keys\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.548742 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-scripts\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.549754 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-config-data\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.552639 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-combined-ca-bundle\") pod \"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.556895 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k42qv\" (UniqueName: \"kubernetes.io/projected/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-kube-api-access-k42qv\") pod 
\"keystone-bootstrap-vldrp\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.668370 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a956ae21-8721-4f0a-815f-acb82958ec28","Type":"ContainerStarted","Data":"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f"} Feb 18 19:53:53 crc kubenswrapper[4932]: E0218 19:53:53.671125 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.58:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-cpzcj" podUID="43f771cb-173f-4939-b1d1-e7d1b21834cb" Feb 18 19:53:53 crc kubenswrapper[4932]: I0218 19:53:53.707312 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:53:54 crc kubenswrapper[4932]: E0218 19:53:54.511614 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 18 19:53:54 crc kubenswrapper[4932]: E0218 19:53:54.511937 4932 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.58:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 18 19:53:54 crc kubenswrapper[4932]: E0218 19:53:54.512079 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.58:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qqllv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-nqxxn_openstack(3f831817-b833-4ee3-b1e9-77d9c02416ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:53:54 crc kubenswrapper[4932]: E0218 19:53:54.513308 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-nqxxn" podUID="3f831817-b833-4ee3-b1e9-77d9c02416ed" Feb 18 19:53:54 crc kubenswrapper[4932]: E0218 19:53:54.676792 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.58:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-nqxxn" podUID="3f831817-b833-4ee3-b1e9-77d9c02416ed" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.013900 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6877c868f8-jvwwn"] Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.033734 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75df984768-5mv9k"] Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.165622 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vldrp"] Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.190411 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63ad2af-4b3b-4aa5-a300-06aadeef8149" path="/var/lib/kubelet/pods/c63ad2af-4b3b-4aa5-a300-06aadeef8149/volumes" Feb 18 19:53:55 crc kubenswrapper[4932]: W0218 19:53:55.435514 4932 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddec0e208_2bfc_4661_8395_c56418bb9307.slice/crio-0a23db9200dc7e24b7810e1e26b3a65a213a638cce894066f30cf730bad21368 WatchSource:0}: Error finding container 0a23db9200dc7e24b7810e1e26b3a65a213a638cce894066f30cf730bad21368: Status 404 returned error can't find the container with id 0a23db9200dc7e24b7810e1e26b3a65a213a638cce894066f30cf730bad21368 Feb 18 19:53:55 crc kubenswrapper[4932]: W0218 19:53:55.437205 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90dd0ecb_25a6_463a_a0d8_187c5c5478c5.slice/crio-2ab6f11029940ee49ae4bb7e60b6c065c2cfba299deef17e59e03c9b54af4829 WatchSource:0}: Error finding container 2ab6f11029940ee49ae4bb7e60b6c065c2cfba299deef17e59e03c9b54af4829: Status 404 returned error can't find the container with id 2ab6f11029940ee49ae4bb7e60b6c065c2cfba299deef17e59e03c9b54af4829 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.706786 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"96fe12c6-435c-4ef9-a340-c15cd050d898","Type":"ContainerStarted","Data":"dbdae9f53819c07d29e95823430d3cc7a7fe94e92688f6b0895ae6c060733453"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.711429 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5bd90883-79db-4903-87ab-828b9608f9fa","Type":"ContainerStarted","Data":"fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.713052 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vldrp" event={"ID":"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd","Type":"ContainerStarted","Data":"8e36575f312ac74d40b63b16208afa722288494a47294670fd9808ea408dc232"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.717918 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-6877c868f8-jvwwn" event={"ID":"90dd0ecb-25a6-463a-a0d8-187c5c5478c5","Type":"ContainerStarted","Data":"2ab6f11029940ee49ae4bb7e60b6c065c2cfba299deef17e59e03c9b54af4829"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.721169 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"efabc52d-6f3c-4442-9b80-09577d6d5ed7","Type":"ContainerStarted","Data":"3bb5786715d4653ff11b29e662c3a16b899ce26d5a3ffbce47843577ab6828a2"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.721348 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api-log" containerID="cri-o://21f540805a94ed439a7fc5568d03546bf5918b51410b050c0717633a77e5be9d" gracePeriod=30 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.721742 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api" containerID="cri-o://3bb5786715d4653ff11b29e662c3a16b899ce26d5a3ffbce47843577ab6828a2" gracePeriod=30 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.721977 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.768413 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=25.597982618 podStartE2EDuration="29.768391951s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:27.84827548 +0000 UTC m=+1171.430230315" lastFinishedPulling="2026-02-18 19:53:32.018684803 +0000 UTC m=+1175.600639648" observedRunningTime="2026-02-18 19:53:55.725692079 +0000 UTC m=+1199.307646924" watchObservedRunningTime="2026-02-18 19:53:55.768391951 +0000 UTC m=+1199.350346796" Feb 18 19:53:55 crc 
kubenswrapper[4932]: I0218 19:53:55.776616 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.150:9322/\": EOF" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.814106 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=29.814088896 podStartE2EDuration="29.814088896s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:55.806602532 +0000 UTC m=+1199.388557377" watchObservedRunningTime="2026-02-18 19:53:55.814088896 +0000 UTC m=+1199.396043741" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.826671 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-log" containerID="cri-o://e42b6cf62ee0a84f0660d6bd0e0803f31a5ed60ee2064a6fb1ff3db60b38d545" gracePeriod=30 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.826837 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ef7f755-fa76-4e5c-8689-06727a6a9204","Type":"ContainerStarted","Data":"3485f1bd76a9fbf8fa572bdcacbfb0c9029328eeea0173e700694eb380d91d42"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.826889 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-httpd" containerID="cri-o://3485f1bd76a9fbf8fa572bdcacbfb0c9029328eeea0173e700694eb380d91d42" gracePeriod=30 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.851039 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-75df984768-5mv9k" event={"ID":"dec0e208-2bfc-4661-8395-c56418bb9307","Type":"ContainerStarted","Data":"0a23db9200dc7e24b7810e1e26b3a65a213a638cce894066f30cf730bad21368"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.875418 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=6.186353563 podStartE2EDuration="29.875400816s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:28.324854796 +0000 UTC m=+1171.906809651" lastFinishedPulling="2026-02-18 19:53:52.013902029 +0000 UTC m=+1195.595856904" observedRunningTime="2026-02-18 19:53:55.840451505 +0000 UTC m=+1199.422406350" watchObservedRunningTime="2026-02-18 19:53:55.875400816 +0000 UTC m=+1199.457355661" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.889515 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" event={"ID":"0affb7f8-ebd4-4d8d-b41c-dd968316038d","Type":"ContainerStarted","Data":"2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.890491 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.905148 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=29.905129918 podStartE2EDuration="29.905129918s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:55.890803905 +0000 UTC m=+1199.472758750" watchObservedRunningTime="2026-02-18 19:53:55.905129918 +0000 UTC m=+1199.487084763" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.927006 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-df7zx" event={"ID":"30efc86e-0c26-42e4-b907-1d4d985912ed","Type":"ContainerStarted","Data":"d60abba7265ba14494902810d1153e145d30148ef253f739d8bb7a9a9675f1f8"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.935738 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4cfbdb9c-hwmr5" event={"ID":"4938c577-60aa-45c3-9190-b6e82bcf8b0d","Type":"ContainerStarted","Data":"c85e580ee020727173d28e445621bbce2289b58bcee15597e5fb5350c78183fd"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.938633 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" podStartSLOduration=29.938611763 podStartE2EDuration="29.938611763s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:55.923678595 +0000 UTC m=+1199.505633440" watchObservedRunningTime="2026-02-18 19:53:55.938611763 +0000 UTC m=+1199.520566608" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.949730 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a956ae21-8721-4f0a-815f-acb82958ec28","Type":"ContainerStarted","Data":"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41"} Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.949884 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-log" containerID="cri-o://52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f" gracePeriod=30 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.950631 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" 
containerName="glance-httpd" containerID="cri-o://68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41" gracePeriod=30 Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.954090 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-df7zx" podStartSLOduration=4.443230228 podStartE2EDuration="29.953994941s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:28.883897012 +0000 UTC m=+1172.465851847" lastFinishedPulling="2026-02-18 19:53:54.394661715 +0000 UTC m=+1197.976616560" observedRunningTime="2026-02-18 19:53:55.946795224 +0000 UTC m=+1199.528750069" watchObservedRunningTime="2026-02-18 19:53:55.953994941 +0000 UTC m=+1199.535949786" Feb 18 19:53:55 crc kubenswrapper[4932]: I0218 19:53:55.976886 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=28.976862654 podStartE2EDuration="28.976862654s" podCreationTimestamp="2026-02-18 19:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:55.970448326 +0000 UTC m=+1199.552403171" watchObservedRunningTime="2026-02-18 19:53:55.976862654 +0000 UTC m=+1199.558817499" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.702838 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.718200 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.718249 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.772307 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830378 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-httpd-run\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830427 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z4dm\" (UniqueName: \"kubernetes.io/projected/a956ae21-8721-4f0a-815f-acb82958ec28-kube-api-access-6z4dm\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830493 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-combined-ca-bundle\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830578 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-scripts\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: 
\"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830602 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-config-data\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830624 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-logs\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830681 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-public-tls-certs\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830720 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"a956ae21-8721-4f0a-815f-acb82958ec28\" (UID: \"a956ae21-8721-4f0a-815f-acb82958ec28\") " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.830841 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.831027 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-logs" (OuterVolumeSpecName: "logs") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.831152 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.831195 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a956ae21-8721-4f0a-815f-acb82958ec28-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.838670 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.839011 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-scripts" (OuterVolumeSpecName: "scripts") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.839121 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a956ae21-8721-4f0a-815f-acb82958ec28-kube-api-access-6z4dm" (OuterVolumeSpecName: "kube-api-access-6z4dm") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "kube-api-access-6z4dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.863918 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.884992 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.898396 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-config-data" (OuterVolumeSpecName: "config-data") pod "a956ae21-8721-4f0a-815f-acb82958ec28" (UID: "a956ae21-8721-4f0a-815f-acb82958ec28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.932815 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.932857 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.932873 4932 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.932916 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.932933 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z4dm\" (UniqueName: \"kubernetes.io/projected/a956ae21-8721-4f0a-815f-acb82958ec28-kube-api-access-6z4dm\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.932946 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a956ae21-8721-4f0a-815f-acb82958ec28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.960431 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965516 4932 generic.go:334] "Generic (PLEG): container finished" 
podID="a956ae21-8721-4f0a-815f-acb82958ec28" containerID="68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41" exitCode=0 Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965551 4932 generic.go:334] "Generic (PLEG): container finished" podID="a956ae21-8721-4f0a-815f-acb82958ec28" containerID="52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f" exitCode=143 Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965598 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a956ae21-8721-4f0a-815f-acb82958ec28","Type":"ContainerDied","Data":"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41"} Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965628 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a956ae21-8721-4f0a-815f-acb82958ec28","Type":"ContainerDied","Data":"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f"} Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965644 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a956ae21-8721-4f0a-815f-acb82958ec28","Type":"ContainerDied","Data":"4b701277c70d3fe2d0fb203a8c7965ad1fb9d840ac2b0ced500b37db0043c874"} Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965665 4932 scope.go:117] "RemoveContainer" containerID="68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.965794 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.974908 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67874d8bd5-ff7xc" event={"ID":"a620c48b-58fa-487f-8997-e2784ddc497b","Type":"ContainerStarted","Data":"e80cdd4378af4ac5d4d707a290fa639025fc55be34fd9af1c68a0bd06a7b10c3"} Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.974980 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67874d8bd5-ff7xc" event={"ID":"a620c48b-58fa-487f-8997-e2784ddc497b","Type":"ContainerStarted","Data":"97ecd324c61720be922083172bc1852b964c2ee86274e593e6ab59deb4006699"} Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.975194 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67874d8bd5-ff7xc" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon-log" containerID="cri-o://97ecd324c61720be922083172bc1852b964c2ee86274e593e6ab59deb4006699" gracePeriod=30 Feb 18 19:53:56 crc kubenswrapper[4932]: I0218 19:53:56.975464 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67874d8bd5-ff7xc" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon" containerID="cri-o://e80cdd4378af4ac5d4d707a290fa639025fc55be34fd9af1c68a0bd06a7b10c3" gracePeriod=30 Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.001501 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6877c868f8-jvwwn" event={"ID":"90dd0ecb-25a6-463a-a0d8-187c5c5478c5","Type":"ContainerStarted","Data":"ab0a233e7d39fe12bab8290499fd31156075ffd8db7b042097ce10342aa59916"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.003138 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6877c868f8-jvwwn" 
event={"ID":"90dd0ecb-25a6-463a-a0d8-187c5c5478c5","Type":"ContainerStarted","Data":"9682694caadc3019ce7876466d818429a2d77007240a66f539827566ad570483"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.004520 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-67874d8bd5-ff7xc" podStartSLOduration=4.810732069 podStartE2EDuration="31.00450299s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:28.392964783 +0000 UTC m=+1171.974919628" lastFinishedPulling="2026-02-18 19:53:54.586735704 +0000 UTC m=+1198.168690549" observedRunningTime="2026-02-18 19:53:57.002409688 +0000 UTC m=+1200.584364543" watchObservedRunningTime="2026-02-18 19:53:57.00450299 +0000 UTC m=+1200.586457835" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.010157 4932 generic.go:334] "Generic (PLEG): container finished" podID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerID="21f540805a94ed439a7fc5568d03546bf5918b51410b050c0717633a77e5be9d" exitCode=143 Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.010282 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"efabc52d-6f3c-4442-9b80-09577d6d5ed7","Type":"ContainerDied","Data":"21f540805a94ed439a7fc5568d03546bf5918b51410b050c0717633a77e5be9d"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.012294 4932 generic.go:334] "Generic (PLEG): container finished" podID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerID="3485f1bd76a9fbf8fa572bdcacbfb0c9029328eeea0173e700694eb380d91d42" exitCode=0 Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.012317 4932 generic.go:334] "Generic (PLEG): container finished" podID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerID="e42b6cf62ee0a84f0660d6bd0e0803f31a5ed60ee2064a6fb1ff3db60b38d545" exitCode=143 Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.012394 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"4ef7f755-fa76-4e5c-8689-06727a6a9204","Type":"ContainerDied","Data":"3485f1bd76a9fbf8fa572bdcacbfb0c9029328eeea0173e700694eb380d91d42"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.012455 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ef7f755-fa76-4e5c-8689-06727a6a9204","Type":"ContainerDied","Data":"e42b6cf62ee0a84f0660d6bd0e0803f31a5ed60ee2064a6fb1ff3db60b38d545"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.017204 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75df984768-5mv9k" event={"ID":"dec0e208-2bfc-4661-8395-c56418bb9307","Type":"ContainerStarted","Data":"c14c2db9c2e97146ded5c1be64f375a20e4d3dc8027f2eb556b8226700b572e9"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.017228 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75df984768-5mv9k" event={"ID":"dec0e208-2bfc-4661-8395-c56418bb9307","Type":"ContainerStarted","Data":"8938c10b66b4f6d7e20437bee59ce3c16a7181c0a809f3e865b01b219862d8d7"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.027564 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6877c868f8-jvwwn" podStartSLOduration=21.027546067 podStartE2EDuration="21.027546067s" podCreationTimestamp="2026-02-18 19:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:57.019646303 +0000 UTC m=+1200.601601168" watchObservedRunningTime="2026-02-18 19:53:57.027546067 +0000 UTC m=+1200.609500912" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.034400 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 
19:53:57.043298 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerStarted","Data":"c7ee5732776c18a927c72c5ff1cc708a0c4c7cbb7be39c25d6f15f19eb006153"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.056348 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4cfbdb9c-hwmr5" event={"ID":"4938c577-60aa-45c3-9190-b6e82bcf8b0d","Type":"ContainerStarted","Data":"f5769d60f6e01bf4316e0a1d5902b22aaf988b784a78ee3cc62feeec1f37553a"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.056506 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b4cfbdb9c-hwmr5" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon-log" containerID="cri-o://c85e580ee020727173d28e445621bbce2289b58bcee15597e5fb5350c78183fd" gracePeriod=30 Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.056698 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b4cfbdb9c-hwmr5" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon" containerID="cri-o://f5769d60f6e01bf4316e0a1d5902b22aaf988b784a78ee3cc62feeec1f37553a" gracePeriod=30 Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.068221 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vldrp" event={"ID":"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd","Type":"ContainerStarted","Data":"e02396c72df7f91c2b9a6adb3ff52d02133d145e009ed0755b0356a1da74ee73"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.081220 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-644d9bbcf7-chs9h" event={"ID":"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf","Type":"ContainerStarted","Data":"a824fe0a64ae9746970f5bc8a389ffaa0e7b9eacf3d8dea3f2ebb12195def55c"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.081292 4932 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/horizon-644d9bbcf7-chs9h" event={"ID":"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf","Type":"ContainerStarted","Data":"80df06be1d2a603214b5aa7b38525d904a38b1052555a7f95c74bc71722c9961"} Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.081389 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-644d9bbcf7-chs9h" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon-log" containerID="cri-o://80df06be1d2a603214b5aa7b38525d904a38b1052555a7f95c74bc71722c9961" gracePeriod=30 Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.081662 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-644d9bbcf7-chs9h" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon" containerID="cri-o://a824fe0a64ae9746970f5bc8a389ffaa0e7b9eacf3d8dea3f2ebb12195def55c" gracePeriod=30 Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.082096 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75df984768-5mv9k" podStartSLOduration=21.082075969999998 podStartE2EDuration="21.08207597s" podCreationTimestamp="2026-02-18 19:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:57.044976406 +0000 UTC m=+1200.626931251" watchObservedRunningTime="2026-02-18 19:53:57.08207597 +0000 UTC m=+1200.664030815" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.082366 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.107003 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.127237 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:57 crc 
kubenswrapper[4932]: I0218 19:53:57.145033 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.146661 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.155454 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:57 crc kubenswrapper[4932]: E0218 19:53:57.155989 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-httpd" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.156007 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-httpd" Feb 18 19:53:57 crc kubenswrapper[4932]: E0218 19:53:57.156021 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-log" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.156028 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-log" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.156315 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-httpd" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.156342 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" containerName="glance-log" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.168885 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.198426 4932 scope.go:117] "RemoveContainer" containerID="52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.199562 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.206763 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.210733 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b4cfbdb9c-hwmr5" podStartSLOduration=5.652345942 podStartE2EDuration="31.210709067s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:28.873631559 +0000 UTC m=+1172.455586404" lastFinishedPulling="2026-02-18 19:53:54.431994674 +0000 UTC m=+1198.013949529" observedRunningTime="2026-02-18 19:53:57.096366852 +0000 UTC m=+1200.678321697" watchObservedRunningTime="2026-02-18 19:53:57.210709067 +0000 UTC m=+1200.792663912" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.266704 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a956ae21-8721-4f0a-815f-acb82958ec28" path="/var/lib/kubelet/pods/a956ae21-8721-4f0a-815f-acb82958ec28/volumes" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.267583 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.278483 4932 scope.go:117] "RemoveContainer" containerID="68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41" Feb 18 19:53:57 crc kubenswrapper[4932]: E0218 19:53:57.285482 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41\": container with ID starting with 68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41 not found: ID does not exist" containerID="68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.285526 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41"} err="failed to get container status \"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41\": rpc error: code = NotFound desc = could not find container \"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41\": container with ID starting with 68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41 not found: ID does not exist" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.285551 4932 scope.go:117] "RemoveContainer" containerID="52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.289394 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.291425 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vldrp" podStartSLOduration=4.291405574 podStartE2EDuration="4.291405574s" podCreationTimestamp="2026-02-18 19:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:53:57.120594548 +0000 UTC m=+1200.702549393" watchObservedRunningTime="2026-02-18 19:53:57.291405574 +0000 UTC m=+1200.873360429" Feb 18 19:53:57 crc kubenswrapper[4932]: E0218 19:53:57.300511 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f\": container with ID starting with 52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f not found: ID does not exist" containerID="52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.300588 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f"} err="failed to get container status \"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f\": rpc error: code = NotFound desc = could not find container \"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f\": container with ID starting with 52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f not found: ID does not exist" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.300615 4932 scope.go:117] "RemoveContainer" containerID="68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.301361 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41"} err="failed to get container status \"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41\": rpc error: code = NotFound desc = could not find container \"68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41\": container with ID starting with 68db5d186a62dee61446e9e5db32e8987a07c7bdc8e597023e93136d466b4e41 not found: ID does not exist" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.301378 4932 scope.go:117] "RemoveContainer" containerID="52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.303092 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f"} err="failed to get container status \"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f\": rpc error: code = NotFound desc = could not find container \"52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f\": container with ID starting with 52b16c6d849756b15eb3f5cc7efc1a745db51fd1c1701d86fe5c43a1d41da03f not found: ID does not exist" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.315701 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-644d9bbcf7-chs9h" podStartSLOduration=7.698692835 podStartE2EDuration="28.315684232s" podCreationTimestamp="2026-02-18 19:53:29 +0000 UTC" firstStartedPulling="2026-02-18 19:53:34.827381645 +0000 UTC m=+1178.409336490" lastFinishedPulling="2026-02-18 19:53:55.444373042 +0000 UTC m=+1199.026327887" observedRunningTime="2026-02-18 19:53:57.158589664 +0000 UTC m=+1200.740544509" watchObservedRunningTime="2026-02-18 19:53:57.315684232 +0000 UTC m=+1200.897639077" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.351450 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.351481 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bxxv\" (UniqueName: \"kubernetes.io/projected/67750e31-ed62-4908-9b56-3a46be936224-kube-api-access-2bxxv\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.351575 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.351625 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-scripts\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.351655 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-config-data\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.353372 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-logs\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.353440 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.353487 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.372039 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.450429 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.450472 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456271 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-config-data\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456339 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-logs\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456370 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc 
kubenswrapper[4932]: I0218 19:53:57.456398 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456475 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456492 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bxxv\" (UniqueName: \"kubernetes.io/projected/67750e31-ed62-4908-9b56-3a46be936224-kube-api-access-2bxxv\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456587 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-scripts\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.456964 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-logs\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.458369 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.458945 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.467286 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-config-data\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.468395 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.471625 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-scripts\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.475429 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.481469 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.485859 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bxxv\" (UniqueName: \"kubernetes.io/projected/67750e31-ed62-4908-9b56-3a46be936224-kube-api-access-2bxxv\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.510513 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.522748 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.546255 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.666513 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.759822 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-config-data\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.759892 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d7ll\" (UniqueName: \"kubernetes.io/projected/4ef7f755-fa76-4e5c-8689-06727a6a9204-kube-api-access-6d7ll\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.759940 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-httpd-run\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.760049 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-logs\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.760078 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-internal-tls-certs\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.760145 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-scripts\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.760194 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.760302 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-combined-ca-bundle\") pod \"4ef7f755-fa76-4e5c-8689-06727a6a9204\" (UID: \"4ef7f755-fa76-4e5c-8689-06727a6a9204\") " Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.763518 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.763543 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-logs" (OuterVolumeSpecName: "logs") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.770397 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef7f755-fa76-4e5c-8689-06727a6a9204-kube-api-access-6d7ll" (OuterVolumeSpecName: "kube-api-access-6d7ll") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "kube-api-access-6d7ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.779579 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-scripts" (OuterVolumeSpecName: "scripts") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.783202 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.833544 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.862506 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.862580 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.862595 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.862606 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d7ll\" (UniqueName: \"kubernetes.io/projected/4ef7f755-fa76-4e5c-8689-06727a6a9204-kube-api-access-6d7ll\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.862620 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ef7f755-fa76-4e5c-8689-06727a6a9204-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.862632 4932 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.865409 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-config-data" (OuterVolumeSpecName: "config-data") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.865461 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ef7f755-fa76-4e5c-8689-06727a6a9204" (UID: "4ef7f755-fa76-4e5c-8689-06727a6a9204"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.876960 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.908243 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.970622 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.970661 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:57 crc kubenswrapper[4932]: I0218 19:53:57.970670 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef7f755-fa76-4e5c-8689-06727a6a9204-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.094393 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.095734 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ef7f755-fa76-4e5c-8689-06727a6a9204","Type":"ContainerDied","Data":"7726c4f68af3477b632315682a36e3711c9d3bff8965ae81fe2c0dd5455b7980"} Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.095776 4932 scope.go:117] "RemoveContainer" containerID="3485f1bd76a9fbf8fa572bdcacbfb0c9029328eeea0173e700694eb380d91d42" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.105244 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.188948 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.216110 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.241829 4932 scope.go:117] "RemoveContainer" containerID="e42b6cf62ee0a84f0660d6bd0e0803f31a5ed60ee2064a6fb1ff3db60b38d545" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.246885 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.306417 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:58 crc kubenswrapper[4932]: E0218 19:53:58.307619 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-httpd" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.307656 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-httpd" Feb 18 19:53:58 crc 
kubenswrapper[4932]: E0218 19:53:58.307716 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-log" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.307726 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-log" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.308405 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-log" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.308442 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" containerName="glance-httpd" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.335659 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.335816 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.338663 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.340521 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.428667 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.503891 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504190 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504356 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504431 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-logs\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504712 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504740 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4vtw\" (UniqueName: \"kubernetes.io/projected/bdfd208a-d781-4471-aa15-5fcbb592ec07-kube-api-access-m4vtw\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504771 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.504828 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.551820 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607297 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607360 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607384 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607407 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-logs\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607500 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 
19:53:58.607528 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4vtw\" (UniqueName: \"kubernetes.io/projected/bdfd208a-d781-4471-aa15-5fcbb592ec07-kube-api-access-m4vtw\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607554 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607585 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.607681 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.610650 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.611718 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-logs\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.621424 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.624973 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.625066 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.626046 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.644310 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4vtw\" (UniqueName: 
\"kubernetes.io/projected/bdfd208a-d781-4471-aa15-5fcbb592ec07-kube-api-access-m4vtw\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.679258 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:53:58 crc kubenswrapper[4932]: I0218 19:53:58.687958 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:53:59 crc kubenswrapper[4932]: I0218 19:53:59.134244 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67750e31-ed62-4908-9b56-3a46be936224","Type":"ContainerStarted","Data":"49ded8c61eff3d7eb04054517499be8ecf50df374bdd44a32ed528213544141a"} Feb 18 19:53:59 crc kubenswrapper[4932]: I0218 19:53:59.135578 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="96fe12c6-435c-4ef9-a340-c15cd050d898" containerName="watcher-decision-engine" containerID="cri-o://dbdae9f53819c07d29e95823430d3cc7a7fe94e92688f6b0895ae6c060733453" gracePeriod=30 Feb 18 19:53:59 crc kubenswrapper[4932]: I0218 19:53:59.208339 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef7f755-fa76-4e5c-8689-06727a6a9204" path="/var/lib/kubelet/pods/4ef7f755-fa76-4e5c-8689-06727a6a9204/volumes" Feb 18 19:53:59 crc kubenswrapper[4932]: I0218 19:53:59.366352 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:53:59 crc kubenswrapper[4932]: I0218 19:53:59.743673 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.001204 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.150:9322/\": read tcp 10.217.0.2:38566->10.217.0.150:9322: read: connection reset by peer" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.168475 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67750e31-ed62-4908-9b56-3a46be936224","Type":"ContainerStarted","Data":"58449f068ea443fd840aa17c5a640ee0e5ae861f046a6ea06594d638db518b63"} Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.175047 4932 generic.go:334] "Generic (PLEG): container finished" podID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerID="3bb5786715d4653ff11b29e662c3a16b899ce26d5a3ffbce47843577ab6828a2" exitCode=0 Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.175101 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"efabc52d-6f3c-4442-9b80-09577d6d5ed7","Type":"ContainerDied","Data":"3bb5786715d4653ff11b29e662c3a16b899ce26d5a3ffbce47843577ab6828a2"} Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.182510 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" containerID="cri-o://fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" gracePeriod=30 Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.182654 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bdfd208a-d781-4471-aa15-5fcbb592ec07","Type":"ContainerStarted","Data":"1fd189f5734df90d29419c8abecc4af71db32a09c9c7fb47958213aa32db2369"} Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 
19:54:00.356263 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.477493 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrlj7\" (UniqueName: \"kubernetes.io/projected/efabc52d-6f3c-4442-9b80-09577d6d5ed7-kube-api-access-nrlj7\") pod \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.477536 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-custom-prometheus-ca\") pod \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.477565 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-config-data\") pod \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.477697 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efabc52d-6f3c-4442-9b80-09577d6d5ed7-logs\") pod \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.477743 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-combined-ca-bundle\") pod \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\" (UID: \"efabc52d-6f3c-4442-9b80-09577d6d5ed7\") " Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.484664 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efabc52d-6f3c-4442-9b80-09577d6d5ed7-logs" (OuterVolumeSpecName: "logs") pod "efabc52d-6f3c-4442-9b80-09577d6d5ed7" (UID: "efabc52d-6f3c-4442-9b80-09577d6d5ed7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.508557 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efabc52d-6f3c-4442-9b80-09577d6d5ed7-kube-api-access-nrlj7" (OuterVolumeSpecName: "kube-api-access-nrlj7") pod "efabc52d-6f3c-4442-9b80-09577d6d5ed7" (UID: "efabc52d-6f3c-4442-9b80-09577d6d5ed7"). InnerVolumeSpecName "kube-api-access-nrlj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.525356 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "efabc52d-6f3c-4442-9b80-09577d6d5ed7" (UID: "efabc52d-6f3c-4442-9b80-09577d6d5ed7"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.536398 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efabc52d-6f3c-4442-9b80-09577d6d5ed7" (UID: "efabc52d-6f3c-4442-9b80-09577d6d5ed7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.579932 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efabc52d-6f3c-4442-9b80-09577d6d5ed7-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.579963 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.579974 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrlj7\" (UniqueName: \"kubernetes.io/projected/efabc52d-6f3c-4442-9b80-09577d6d5ed7-kube-api-access-nrlj7\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.579984 4932 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.606100 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-config-data" (OuterVolumeSpecName: "config-data") pod "efabc52d-6f3c-4442-9b80-09577d6d5ed7" (UID: "efabc52d-6f3c-4442-9b80-09577d6d5ed7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:00 crc kubenswrapper[4932]: I0218 19:54:00.682453 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efabc52d-6f3c-4442-9b80-09577d6d5ed7-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.200547 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67750e31-ed62-4908-9b56-3a46be936224","Type":"ContainerStarted","Data":"fd324d05ae668c3f684220e361c41b6ff46379462c08ea7c413014fe4a371e37"} Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.208199 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"efabc52d-6f3c-4442-9b80-09577d6d5ed7","Type":"ContainerDied","Data":"2d6fd36cf5810909c88050cc20c15f847b1b0069bc0b2e13fc22cf63d5c5c033"} Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.208244 4932 scope.go:117] "RemoveContainer" containerID="3bb5786715d4653ff11b29e662c3a16b899ce26d5a3ffbce47843577ab6828a2" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.208325 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.216907 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bdfd208a-d781-4471-aa15-5fcbb592ec07","Type":"ContainerStarted","Data":"99077386a5dc37e2145b33681651b019f28beed715374edd046c2366a76b2af6"} Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.216936 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bdfd208a-d781-4471-aa15-5fcbb592ec07","Type":"ContainerStarted","Data":"ec4505e85a78c60e725484af01a4d51a03ebf66c4a5ad9b030f60b812e85e4e3"} Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.224767 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.224755 podStartE2EDuration="4.224755s" podCreationTimestamp="2026-02-18 19:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:01.221325546 +0000 UTC m=+1204.803280391" watchObservedRunningTime="2026-02-18 19:54:01.224755 +0000 UTC m=+1204.806709845" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.259614 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.259595878 podStartE2EDuration="3.259595878s" podCreationTimestamp="2026-02-18 19:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:01.246822814 +0000 UTC m=+1204.828777659" watchObservedRunningTime="2026-02-18 19:54:01.259595878 +0000 UTC m=+1204.841550723" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.273954 4932 scope.go:117] "RemoveContainer" containerID="21f540805a94ed439a7fc5568d03546bf5918b51410b050c0717633a77e5be9d" 
Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.297827 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.339191 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.371551 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:01 crc kubenswrapper[4932]: E0218 19:54:01.372008 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api-log" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.372026 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api-log" Feb 18 19:54:01 crc kubenswrapper[4932]: E0218 19:54:01.372039 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.372045 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.372293 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api-log" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.372311 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" containerName="watcher-api" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.373272 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.375771 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.380721 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.508107 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.508564 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b1eaea-5735-4c71-9c13-83bbece4cb4a-logs\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.508618 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-config-data\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.508744 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fdcl\" (UniqueName: \"kubernetes.io/projected/58b1eaea-5735-4c71-9c13-83bbece4cb4a-kube-api-access-5fdcl\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.508780 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.610414 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fdcl\" (UniqueName: \"kubernetes.io/projected/58b1eaea-5735-4c71-9c13-83bbece4cb4a-kube-api-access-5fdcl\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.610471 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.610521 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.610592 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b1eaea-5735-4c71-9c13-83bbece4cb4a-logs\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.610647 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-config-data\") pod 
\"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.611312 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b1eaea-5735-4c71-9c13-83bbece4cb4a-logs\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.615476 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-config-data\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.615787 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.631712 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fdcl\" (UniqueName: \"kubernetes.io/projected/58b1eaea-5735-4c71-9c13-83bbece4cb4a-kube-api-access-5fdcl\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.635327 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " pod="openstack/watcher-api-0" Feb 18 19:54:01 crc kubenswrapper[4932]: I0218 19:54:01.696982 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:02 crc kubenswrapper[4932]: I0218 19:54:02.238825 4932 generic.go:334] "Generic (PLEG): container finished" podID="300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" containerID="e02396c72df7f91c2b9a6adb3ff52d02133d145e009ed0755b0356a1da74ee73" exitCode=0 Feb 18 19:54:02 crc kubenswrapper[4932]: I0218 19:54:02.238912 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vldrp" event={"ID":"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd","Type":"ContainerDied","Data":"e02396c72df7f91c2b9a6adb3ff52d02133d145e009ed0755b0356a1da74ee73"} Feb 18 19:54:02 crc kubenswrapper[4932]: I0218 19:54:02.289293 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:02 crc kubenswrapper[4932]: E0218 19:54:02.453790 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:02 crc kubenswrapper[4932]: E0218 19:54:02.455585 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:02 crc kubenswrapper[4932]: E0218 19:54:02.456584 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:02 crc kubenswrapper[4932]: E0218 
19:54:02.456614 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:02 crc kubenswrapper[4932]: I0218 19:54:02.979351 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" Feb 18 19:54:03 crc kubenswrapper[4932]: I0218 19:54:03.061477 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d589bd999-klfsc"] Feb 18 19:54:03 crc kubenswrapper[4932]: I0218 19:54:03.062030 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" podUID="93b88bfc-e293-4af3-a085-184607bf9327" containerName="dnsmasq-dns" containerID="cri-o://f52951b30b5592f2aeb5eae2773bb2ba20887b8705143fd09cf41ec26c0f786e" gracePeriod=10 Feb 18 19:54:03 crc kubenswrapper[4932]: I0218 19:54:03.199629 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efabc52d-6f3c-4442-9b80-09577d6d5ed7" path="/var/lib/kubelet/pods/efabc52d-6f3c-4442-9b80-09577d6d5ed7/volumes" Feb 18 19:54:03 crc kubenswrapper[4932]: I0218 19:54:03.251292 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"58b1eaea-5735-4c71-9c13-83bbece4cb4a","Type":"ContainerStarted","Data":"914e01d59497860d56f1ffb2b4e5a0d0f4b154e5f7add6b40136f9f6dda7044e"} Feb 18 19:54:03 crc kubenswrapper[4932]: I0218 19:54:03.251331 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"58b1eaea-5735-4c71-9c13-83bbece4cb4a","Type":"ContainerStarted","Data":"f81f3e519e272ec341248a6b7ba9a38b40c5833968d66d807fd43af06ff4634a"} Feb 18 19:54:04 crc kubenswrapper[4932]: I0218 19:54:04.261550 4932 generic.go:334] "Generic (PLEG): container 
finished" podID="30efc86e-0c26-42e4-b907-1d4d985912ed" containerID="d60abba7265ba14494902810d1153e145d30148ef253f739d8bb7a9a9675f1f8" exitCode=0 Feb 18 19:54:04 crc kubenswrapper[4932]: I0218 19:54:04.261786 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-df7zx" event={"ID":"30efc86e-0c26-42e4-b907-1d4d985912ed","Type":"ContainerDied","Data":"d60abba7265ba14494902810d1153e145d30148ef253f739d8bb7a9a9675f1f8"} Feb 18 19:54:04 crc kubenswrapper[4932]: I0218 19:54:04.265341 4932 generic.go:334] "Generic (PLEG): container finished" podID="93b88bfc-e293-4af3-a085-184607bf9327" containerID="f52951b30b5592f2aeb5eae2773bb2ba20887b8705143fd09cf41ec26c0f786e" exitCode=0 Feb 18 19:54:04 crc kubenswrapper[4932]: I0218 19:54:04.265399 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" event={"ID":"93b88bfc-e293-4af3-a085-184607bf9327","Type":"ContainerDied","Data":"f52951b30b5592f2aeb5eae2773bb2ba20887b8705143fd09cf41ec26c0f786e"} Feb 18 19:54:04 crc kubenswrapper[4932]: I0218 19:54:04.270476 4932 generic.go:334] "Generic (PLEG): container finished" podID="96fe12c6-435c-4ef9-a340-c15cd050d898" containerID="dbdae9f53819c07d29e95823430d3cc7a7fe94e92688f6b0895ae6c060733453" exitCode=1 Feb 18 19:54:04 crc kubenswrapper[4932]: I0218 19:54:04.270523 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"96fe12c6-435c-4ef9-a340-c15cd050d898","Type":"ContainerDied","Data":"dbdae9f53819c07d29e95823430d3cc7a7fe94e92688f6b0895ae6c060733453"} Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.758922 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.759369 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 
19:54:06.761608 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75df984768-5mv9k" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.810253 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.810302 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.827977 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6877c868f8-jvwwn" podUID="90dd0ecb-25a6-463a-a0d8-187c5c5478c5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.165:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.165:8443: connect: connection refused" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.837549 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-df7zx" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.847064 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.859583 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925285 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-combined-ca-bundle\") pod \"96fe12c6-435c-4ef9-a340-c15cd050d898\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925337 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-credential-keys\") pod \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925408 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-custom-prometheus-ca\") pod \"96fe12c6-435c-4ef9-a340-c15cd050d898\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925450 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-config-data\") pod \"30efc86e-0c26-42e4-b907-1d4d985912ed\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925498 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-scripts\") pod \"30efc86e-0c26-42e4-b907-1d4d985912ed\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925522 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/30efc86e-0c26-42e4-b907-1d4d985912ed-logs\") pod \"30efc86e-0c26-42e4-b907-1d4d985912ed\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925544 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l4dr\" (UniqueName: \"kubernetes.io/projected/30efc86e-0c26-42e4-b907-1d4d985912ed-kube-api-access-5l4dr\") pod \"30efc86e-0c26-42e4-b907-1d4d985912ed\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925577 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-config-data\") pod \"96fe12c6-435c-4ef9-a340-c15cd050d898\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925597 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-combined-ca-bundle\") pod \"30efc86e-0c26-42e4-b907-1d4d985912ed\" (UID: \"30efc86e-0c26-42e4-b907-1d4d985912ed\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925659 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv8fx\" (UniqueName: \"kubernetes.io/projected/96fe12c6-435c-4ef9-a340-c15cd050d898-kube-api-access-wv8fx\") pod \"96fe12c6-435c-4ef9-a340-c15cd050d898\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925702 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-fernet-keys\") pod \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 
19:54:06.925733 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96fe12c6-435c-4ef9-a340-c15cd050d898-logs\") pod \"96fe12c6-435c-4ef9-a340-c15cd050d898\" (UID: \"96fe12c6-435c-4ef9-a340-c15cd050d898\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925796 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k42qv\" (UniqueName: \"kubernetes.io/projected/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-kube-api-access-k42qv\") pod \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925821 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-scripts\") pod \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925841 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-combined-ca-bundle\") pod \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.925913 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-config-data\") pod \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\" (UID: \"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd\") " Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.927395 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96fe12c6-435c-4ef9-a340-c15cd050d898-logs" (OuterVolumeSpecName: "logs") pod "96fe12c6-435c-4ef9-a340-c15cd050d898" (UID: 
"96fe12c6-435c-4ef9-a340-c15cd050d898"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.937582 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30efc86e-0c26-42e4-b907-1d4d985912ed-logs" (OuterVolumeSpecName: "logs") pod "30efc86e-0c26-42e4-b907-1d4d985912ed" (UID: "30efc86e-0c26-42e4-b907-1d4d985912ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.950810 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" (UID: "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.954425 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96fe12c6-435c-4ef9-a340-c15cd050d898-kube-api-access-wv8fx" (OuterVolumeSpecName: "kube-api-access-wv8fx") pod "96fe12c6-435c-4ef9-a340-c15cd050d898" (UID: "96fe12c6-435c-4ef9-a340-c15cd050d898"). InnerVolumeSpecName "kube-api-access-wv8fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.955593 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-scripts" (OuterVolumeSpecName: "scripts") pod "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" (UID: "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.958071 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30efc86e-0c26-42e4-b907-1d4d985912ed-kube-api-access-5l4dr" (OuterVolumeSpecName: "kube-api-access-5l4dr") pod "30efc86e-0c26-42e4-b907-1d4d985912ed" (UID: "30efc86e-0c26-42e4-b907-1d4d985912ed"). InnerVolumeSpecName "kube-api-access-5l4dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.962560 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-kube-api-access-k42qv" (OuterVolumeSpecName: "kube-api-access-k42qv") pod "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" (UID: "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd"). InnerVolumeSpecName "kube-api-access-k42qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:06 crc kubenswrapper[4932]: I0218 19:54:06.979222 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-scripts" (OuterVolumeSpecName: "scripts") pod "30efc86e-0c26-42e4-b907-1d4d985912ed" (UID: "30efc86e-0c26-42e4-b907-1d4d985912ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.038381 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" (UID: "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.040832 4932 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.041347 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.076935 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30efc86e-0c26-42e4-b907-1d4d985912ed-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.077373 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l4dr\" (UniqueName: \"kubernetes.io/projected/30efc86e-0c26-42e4-b907-1d4d985912ed-kube-api-access-5l4dr\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.077493 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv8fx\" (UniqueName: \"kubernetes.io/projected/96fe12c6-435c-4ef9-a340-c15cd050d898-kube-api-access-wv8fx\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.077589 4932 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.077644 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96fe12c6-435c-4ef9-a340-c15cd050d898-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.077702 4932 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-k42qv\" (UniqueName: \"kubernetes.io/projected/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-kube-api-access-k42qv\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.077753 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.040905 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-config-data" (OuterVolumeSpecName: "config-data") pod "30efc86e-0c26-42e4-b907-1d4d985912ed" (UID: "30efc86e-0c26-42e4-b907-1d4d985912ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.086602 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-config-data" (OuterVolumeSpecName: "config-data") pod "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" (UID: "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.088419 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" (UID: "300b7bcb-1caa-440a-88bc-dc2c4e3b43cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.096562 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.104930 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30efc86e-0c26-42e4-b907-1d4d985912ed" (UID: "30efc86e-0c26-42e4-b907-1d4d985912ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.141712 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96fe12c6-435c-4ef9-a340-c15cd050d898" (UID: "96fe12c6-435c-4ef9-a340-c15cd050d898"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.148898 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-config-data" (OuterVolumeSpecName: "config-data") pod "96fe12c6-435c-4ef9-a340-c15cd050d898" (UID: "96fe12c6-435c-4ef9-a340-c15cd050d898"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.150845 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "96fe12c6-435c-4ef9-a340-c15cd050d898" (UID: "96fe12c6-435c-4ef9-a340-c15cd050d898"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.180514 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-dns-svc\") pod \"93b88bfc-e293-4af3-a085-184607bf9327\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.180599 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-config\") pod \"93b88bfc-e293-4af3-a085-184607bf9327\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.180695 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2b4k\" (UniqueName: \"kubernetes.io/projected/93b88bfc-e293-4af3-a085-184607bf9327-kube-api-access-j2b4k\") pod \"93b88bfc-e293-4af3-a085-184607bf9327\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.180807 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-nb\") pod \"93b88bfc-e293-4af3-a085-184607bf9327\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.180874 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-sb\") pod \"93b88bfc-e293-4af3-a085-184607bf9327\" (UID: \"93b88bfc-e293-4af3-a085-184607bf9327\") " Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181429 4932 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181492 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181502 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181510 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30efc86e-0c26-42e4-b907-1d4d985912ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181518 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181526 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.181536 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96fe12c6-435c-4ef9-a340-c15cd050d898-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.199783 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93b88bfc-e293-4af3-a085-184607bf9327-kube-api-access-j2b4k" (OuterVolumeSpecName: "kube-api-access-j2b4k") pod 
"93b88bfc-e293-4af3-a085-184607bf9327" (UID: "93b88bfc-e293-4af3-a085-184607bf9327"). InnerVolumeSpecName "kube-api-access-j2b4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.283445 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2b4k\" (UniqueName: \"kubernetes.io/projected/93b88bfc-e293-4af3-a085-184607bf9327-kube-api-access-j2b4k\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.299392 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"96fe12c6-435c-4ef9-a340-c15cd050d898","Type":"ContainerDied","Data":"50a26765d82393ad4f763879251ca7f0c251c1c50f74af99544b44224a950233"} Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.299451 4932 scope.go:117] "RemoveContainer" containerID="dbdae9f53819c07d29e95823430d3cc7a7fe94e92688f6b0895ae6c060733453" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.299588 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.308034 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-df7zx" event={"ID":"30efc86e-0c26-42e4-b907-1d4d985912ed","Type":"ContainerDied","Data":"7657bac55ccacde4594557141b7b117e70c960cb0019ec4ad053450683538da6"} Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.308077 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7657bac55ccacde4594557141b7b117e70c960cb0019ec4ad053450683538da6" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.308459 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-df7zx" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.322072 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vldrp" event={"ID":"300b7bcb-1caa-440a-88bc-dc2c4e3b43cd","Type":"ContainerDied","Data":"8e36575f312ac74d40b63b16208afa722288494a47294670fd9808ea408dc232"} Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.322112 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e36575f312ac74d40b63b16208afa722288494a47294670fd9808ea408dc232" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.322079 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vldrp" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.336716 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.352481 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cpzcj" event={"ID":"43f771cb-173f-4939-b1d1-e7d1b21834cb","Type":"ContainerStarted","Data":"49092cff964806110781a1ce6f40a2126d58bcb45c2544f984759951802714c3"} Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.366932 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.384784 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"58b1eaea-5735-4c71-9c13-83bbece4cb4a","Type":"ContainerStarted","Data":"7e4457ed87ab79af627e36e59e08bc3082309f2331d169ed0c87ae852f7b68d1"} Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385421 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385450 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/watcher-decision-engine-0"] Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.385817 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30efc86e-0c26-42e4-b907-1d4d985912ed" containerName="placement-db-sync" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385832 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="30efc86e-0c26-42e4-b907-1d4d985912ed" containerName="placement-db-sync" Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.385849 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96fe12c6-435c-4ef9-a340-c15cd050d898" containerName="watcher-decision-engine" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385854 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96fe12c6-435c-4ef9-a340-c15cd050d898" containerName="watcher-decision-engine" Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.385865 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b88bfc-e293-4af3-a085-184607bf9327" containerName="init" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385871 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="93b88bfc-e293-4af3-a085-184607bf9327" containerName="init" Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.385886 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" containerName="keystone-bootstrap" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385892 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" containerName="keystone-bootstrap" Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.385907 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b88bfc-e293-4af3-a085-184607bf9327" containerName="dnsmasq-dns" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.385913 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="93b88bfc-e293-4af3-a085-184607bf9327" 
containerName="dnsmasq-dns" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.386087 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" containerName="keystone-bootstrap" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.386099 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="30efc86e-0c26-42e4-b907-1d4d985912ed" containerName="placement-db-sync" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.386109 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="93b88bfc-e293-4af3-a085-184607bf9327" containerName="dnsmasq-dns" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.386118 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96fe12c6-435c-4ef9-a340-c15cd050d898" containerName="watcher-decision-engine" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.386745 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.390094 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": dial tcp 10.217.0.169:9322: connect: connection refused" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.390445 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.401825 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" event={"ID":"93b88bfc-e293-4af3-a085-184607bf9327","Type":"ContainerDied","Data":"4479d0a19d18775cbdda9e3e29eb2fb3a08c6720c8c950eb49addc462844cb3a"} Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.401884 4932 scope.go:117] "RemoveContainer" 
containerID="f52951b30b5592f2aeb5eae2773bb2ba20887b8705143fd09cf41ec26c0f786e" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.402100 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d589bd999-klfsc" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.408867 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.410556 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-cpzcj" podStartSLOduration=4.655847196 podStartE2EDuration="41.410522229s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:30.098610264 +0000 UTC m=+1173.680565109" lastFinishedPulling="2026-02-18 19:54:06.853285297 +0000 UTC m=+1210.435240142" observedRunningTime="2026-02-18 19:54:07.375527377 +0000 UTC m=+1210.957482222" watchObservedRunningTime="2026-02-18 19:54:07.410522229 +0000 UTC m=+1210.992477074" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.430820 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93b88bfc-e293-4af3-a085-184607bf9327" (UID: "93b88bfc-e293-4af3-a085-184607bf9327"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.430977 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93b88bfc-e293-4af3-a085-184607bf9327" (UID: "93b88bfc-e293-4af3-a085-184607bf9327"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.448322 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=6.448298509 podStartE2EDuration="6.448298509s" podCreationTimestamp="2026-02-18 19:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:07.411599885 +0000 UTC m=+1210.993554730" watchObservedRunningTime="2026-02-18 19:54:07.448298509 +0000 UTC m=+1211.030253364" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.451894 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93b88bfc-e293-4af3-a085-184607bf9327" (UID: "93b88bfc-e293-4af3-a085-184607bf9327"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.453072 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-config" (OuterVolumeSpecName: "config") pod "93b88bfc-e293-4af3-a085-184607bf9327" (UID: "93b88bfc-e293-4af3-a085-184607bf9327"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.454823 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.457151 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.458743 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:07 crc kubenswrapper[4932]: E0218 19:54:07.458777 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.468022 4932 scope.go:117] "RemoveContainer" containerID="a93d81ac35fef706c2981873bb26c1272af93758393d8995dcc39345d9e18399" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487182 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487230 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tb6k\" (UniqueName: \"kubernetes.io/projected/0882c686-1b07-4ac7-a6be-148eff7faa19-kube-api-access-9tb6k\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487264 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487290 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0882c686-1b07-4ac7-a6be-148eff7faa19-logs\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487375 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487732 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487747 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487757 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.487766 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93b88bfc-e293-4af3-a085-184607bf9327-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.547257 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.547307 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.589287 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.589338 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tb6k\" (UniqueName: \"kubernetes.io/projected/0882c686-1b07-4ac7-a6be-148eff7faa19-kube-api-access-9tb6k\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " 
pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.589370 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.589394 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0882c686-1b07-4ac7-a6be-148eff7faa19-logs\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.589414 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.592129 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0882c686-1b07-4ac7-a6be-148eff7faa19-logs\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.594046 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.594792 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.601440 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.607835 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tb6k\" (UniqueName: \"kubernetes.io/projected/0882c686-1b07-4ac7-a6be-148eff7faa19-kube-api-access-9tb6k\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.611979 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.612478 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.739444 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.903852 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d589bd999-klfsc"] Feb 18 19:54:07 crc kubenswrapper[4932]: I0218 19:54:07.916217 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d589bd999-klfsc"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.045474 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-dc76b87d8-4l7z8"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.048555 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.054806 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.055016 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.055039 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.070135 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-s8zmw" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.070344 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.092556 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc76b87d8-4l7z8"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.114085 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5dc9dbf7f4-c6vxb"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.116161 4932 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.119683 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.119683 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.120338 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.120580 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.120692 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.120946 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sk7x7" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121142 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-config-data\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121182 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cc3d08-5639-4155-bee3-b1f461184a24-logs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121200 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-combined-ca-bundle\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121243 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-public-tls-certs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121448 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-internal-tls-certs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121660 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq2zn\" (UniqueName: \"kubernetes.io/projected/86cc3d08-5639-4155-bee3-b1f461184a24-kube-api-access-hq2zn\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.121713 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-scripts\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.130157 4932 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dc9dbf7f4-c6vxb"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223386 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-scripts\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223477 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-internal-tls-certs\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223551 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-config-data\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223579 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cc3d08-5639-4155-bee3-b1f461184a24-logs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223600 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-combined-ca-bundle\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " 
pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223634 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-public-tls-certs\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223688 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgt4b\" (UniqueName: \"kubernetes.io/projected/5742307d-705d-4197-bab4-53ec94801b4d-kube-api-access-bgt4b\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223711 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-credential-keys\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223733 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-public-tls-certs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223780 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-internal-tls-certs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " 
pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223809 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-fernet-keys\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223841 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-config-data\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223885 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-combined-ca-bundle\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223943 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq2zn\" (UniqueName: \"kubernetes.io/projected/86cc3d08-5639-4155-bee3-b1f461184a24-kube-api-access-hq2zn\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.223981 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-scripts\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" 
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.224708 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cc3d08-5639-4155-bee3-b1f461184a24-logs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.231770 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-config-data\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.232047 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-scripts\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.232212 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-combined-ca-bundle\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.234469 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-internal-tls-certs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.241926 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hq2zn\" (UniqueName: \"kubernetes.io/projected/86cc3d08-5639-4155-bee3-b1f461184a24-kube-api-access-hq2zn\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.242209 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-public-tls-certs\") pod \"placement-dc76b87d8-4l7z8\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.268382 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325350 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-scripts\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325404 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-internal-tls-certs\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325452 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-public-tls-certs\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325487 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgt4b\" (UniqueName: \"kubernetes.io/projected/5742307d-705d-4197-bab4-53ec94801b4d-kube-api-access-bgt4b\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325505 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-credential-keys\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325544 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-fernet-keys\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325559 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-config-data\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.325582 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-combined-ca-bundle\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.330157 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-combined-ca-bundle\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.336849 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-scripts\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.337333 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-config-data\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.339072 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-public-tls-certs\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.341313 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-fernet-keys\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.343335 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-credential-keys\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.343630 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5742307d-705d-4197-bab4-53ec94801b4d-internal-tls-certs\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.346051 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgt4b\" (UniqueName: \"kubernetes.io/projected/5742307d-705d-4197-bab4-53ec94801b4d-kube-api-access-bgt4b\") pod \"keystone-5dc9dbf7f4-c6vxb\" (UID: \"5742307d-705d-4197-bab4-53ec94801b4d\") " pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.385869 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc76b87d8-4l7z8"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.451489 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.452906 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerStarted","Data":"04f5dff2832c6635da78aa840490b39a4906ea50c8d89ba21f85a3c5474f7c9b"}
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.465314 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerStarted","Data":"41bbf0004efdd834aa04334d49096cca68b41c5d9f117836f2c8dc6fe6f5d5be"}
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.465370 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.465386 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.689260 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.689762 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.787198 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.804073 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc76b87d8-4l7z8"]
Feb 18 19:54:08 crc kubenswrapper[4932]: I0218 19:54:08.811058 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.067219 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dc9dbf7f4-c6vxb"]
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.196317 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93b88bfc-e293-4af3-a085-184607bf9327" path="/var/lib/kubelet/pods/93b88bfc-e293-4af3-a085-184607bf9327/volumes"
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.196939 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96fe12c6-435c-4ef9-a340-c15cd050d898" path="/var/lib/kubelet/pods/96fe12c6-435c-4ef9-a340-c15cd050d898/volumes"
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.487731 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerStarted","Data":"4de093a700139da46d3f66815f5051f7a579a847fbfe3c9c9fef66a2d56e8e8c"}
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.497657 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc76b87d8-4l7z8" event={"ID":"86cc3d08-5639-4155-bee3-b1f461184a24","Type":"ContainerStarted","Data":"39753a946eb8a2d631a153f1f2e754fb36ce9fa30fb838383236eaf4f306d8fd"}
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.497699 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc76b87d8-4l7z8" event={"ID":"86cc3d08-5639-4155-bee3-b1f461184a24","Type":"ContainerStarted","Data":"e0a8661d91abe6650a2644a6bbb68f5a9be137c080d73c71bfeeedb79d7a94d1"}
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.497711 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc76b87d8-4l7z8" event={"ID":"86cc3d08-5639-4155-bee3-b1f461184a24","Type":"ContainerStarted","Data":"f46833318ddc8961d6f04764c058cb88d8c7c195fabe7b752747972666313452"}
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.498452 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-dc76b87d8-4l7z8"
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.498477 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-dc76b87d8-4l7z8"
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.501616 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc9dbf7f4-c6vxb" event={"ID":"5742307d-705d-4197-bab4-53ec94801b4d","Type":"ContainerStarted","Data":"fbac170756bef77569ea71ae6716558e3b3fc9b88cbfad79c6fd09a46e1aab16"}
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.501641 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dc9dbf7f4-c6vxb" event={"ID":"5742307d-705d-4197-bab4-53ec94801b4d","Type":"ContainerStarted","Data":"eee41656359fb6205565b7a7e83c3774339a7dc7f5af7f3790ebb1fed632c786"}
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.503465 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.503622 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.514742 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.514720114 podStartE2EDuration="2.514720114s" podCreationTimestamp="2026-02-18 19:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:09.501423806 +0000 UTC m=+1213.083378661" watchObservedRunningTime="2026-02-18 19:54:09.514720114 +0000 UTC m=+1213.096674969"
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.521219 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-dc76b87d8-4l7z8" podStartSLOduration=2.521204563 podStartE2EDuration="2.521204563s" podCreationTimestamp="2026-02-18 19:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:09.518604589 +0000 UTC m=+1213.100559434" watchObservedRunningTime="2026-02-18 19:54:09.521204563 +0000 UTC m=+1213.103159398"
Feb 18 19:54:09 crc kubenswrapper[4932]: I0218 19:54:09.545124 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5dc9dbf7f4-c6vxb" podStartSLOduration=1.5451058020000001 podStartE2EDuration="1.545105802s" podCreationTimestamp="2026-02-18 19:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:09.541663117 +0000 UTC m=+1213.123617962" watchObservedRunningTime="2026-02-18 19:54:09.545105802 +0000 UTC m=+1213.127060647"
Feb 18 19:54:10 crc kubenswrapper[4932]: I0218 19:54:10.509590 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 19:54:10 crc kubenswrapper[4932]: I0218 19:54:10.509824 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 19:54:10 crc kubenswrapper[4932]: I0218 19:54:10.512214 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5dc9dbf7f4-c6vxb"
Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.098876 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.100842 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.530109 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nqxxn" event={"ID":"3f831817-b833-4ee3-b1e9-77d9c02416ed","Type":"ContainerStarted","Data":"80213ebfed248f23a59e2cc3d7242b684303a348ef8453068ab05718b9f4df29"}
Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.530411 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.530436 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.553584 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nqxxn" podStartSLOduration=4.791997867 podStartE2EDuration="45.553562998s" podCreationTimestamp="2026-02-18 19:53:26 +0000 UTC" firstStartedPulling="2026-02-18 19:53:28.577180409 +0000 UTC m=+1172.159135254" lastFinishedPulling="2026-02-18 19:54:09.33874554 +0000 UTC m=+1212.920700385" observedRunningTime="2026-02-18 19:54:11.549676162 +0000 UTC m=+1215.131631007" watchObservedRunningTime="2026-02-18 19:54:11.553562998 +0000 UTC m=+1215.135517843"
Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.697943 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0"
Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.697992 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.987870 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0"
Feb 18 19:54:11 crc kubenswrapper[4932]: I0218 19:54:11.988447 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.279450 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85d5f6489d-gxmwz"]
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.280867 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.334229 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85d5f6489d-gxmwz"]
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451025 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-public-tls-certs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451070 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-scripts\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451096 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-internal-tls-certs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451209 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/394a5313-f592-47b5-92ce-5f87a10335d7-logs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451427 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dr5h\" (UniqueName: \"kubernetes.io/projected/394a5313-f592-47b5-92ce-5f87a10335d7-kube-api-access-4dr5h\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451502 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-config-data\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.451663 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-combined-ca-bundle\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: E0218 19:54:12.457556 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 18 19:54:12 crc kubenswrapper[4932]: E0218 19:54:12.460803 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 18 19:54:12 crc kubenswrapper[4932]: E0218 19:54:12.464028 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 18 19:54:12 crc kubenswrapper[4932]: E0218 19:54:12.464085 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.542928 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554034 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dr5h\" (UniqueName: \"kubernetes.io/projected/394a5313-f592-47b5-92ce-5f87a10335d7-kube-api-access-4dr5h\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554125 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-config-data\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554233 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-combined-ca-bundle\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554288 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-public-tls-certs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554311 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-scripts\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554336 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-internal-tls-certs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554400 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/394a5313-f592-47b5-92ce-5f87a10335d7-logs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.554992 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/394a5313-f592-47b5-92ce-5f87a10335d7-logs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.559771 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-scripts\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.561237 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-internal-tls-certs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.561848 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-combined-ca-bundle\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.572654 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-public-tls-certs\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.576786 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/394a5313-f592-47b5-92ce-5f87a10335d7-config-data\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.577849 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dr5h\" (UniqueName: \"kubernetes.io/projected/394a5313-f592-47b5-92ce-5f87a10335d7-kube-api-access-4dr5h\") pod \"placement-85d5f6489d-gxmwz\" (UID: \"394a5313-f592-47b5-92ce-5f87a10335d7\") " pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:12 crc kubenswrapper[4932]: I0218 19:54:12.629034 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:13 crc kubenswrapper[4932]: I0218 19:54:13.240462 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85d5f6489d-gxmwz"]
Feb 18 19:54:13 crc kubenswrapper[4932]: I0218 19:54:13.310600 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 18 19:54:13 crc kubenswrapper[4932]: I0218 19:54:13.310701 4932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 19:54:13 crc kubenswrapper[4932]: I0218 19:54:13.320949 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 18 19:54:13 crc kubenswrapper[4932]: I0218 19:54:13.566932 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85d5f6489d-gxmwz" event={"ID":"394a5313-f592-47b5-92ce-5f87a10335d7","Type":"ContainerStarted","Data":"212201d63b6b8e58f239be487bb94cbe807f228155c6382d5e410092510cb942"}
Feb 18 19:54:13 crc kubenswrapper[4932]: I0218 19:54:13.567196 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85d5f6489d-gxmwz" event={"ID":"394a5313-f592-47b5-92ce-5f87a10335d7","Type":"ContainerStarted","Data":"ce6d6eddda8600e13002c7cc13064dfd297cc5341490d80518ec2257b4ef9593"}
Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.579866 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85d5f6489d-gxmwz" event={"ID":"394a5313-f592-47b5-92ce-5f87a10335d7","Type":"ContainerStarted","Data":"4d80aa8aa803e3cd96c7fe9f9a4dc873cd21201cdcec6d6cabe27f7bf9577faf"}
Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.581338 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.581377 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85d5f6489d-gxmwz"
Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.584006 4932 generic.go:334] "Generic (PLEG): container finished" podID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerID="4de093a700139da46d3f66815f5051f7a579a847fbfe3c9c9fef66a2d56e8e8c" exitCode=1
Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.584705 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerDied","Data":"4de093a700139da46d3f66815f5051f7a579a847fbfe3c9c9fef66a2d56e8e8c"}
Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.585077 4932 scope.go:117] "RemoveContainer" containerID="4de093a700139da46d3f66815f5051f7a579a847fbfe3c9c9fef66a2d56e8e8c"
Feb 18 19:54:14 crc kubenswrapper[4932]: I0218 19:54:14.622448 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85d5f6489d-gxmwz" podStartSLOduration=2.622426286 podStartE2EDuration="2.622426286s" podCreationTimestamp="2026-02-18 19:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:14.621139825 +0000 UTC m=+1218.203094670" watchObservedRunningTime="2026-02-18 19:54:14.622426286 +0000 UTC m=+1218.204381131"
Feb 18 19:54:15 crc kubenswrapper[4932]: I0218 19:54:15.597268 4932 generic.go:334] "Generic (PLEG): container finished" podID="43f771cb-173f-4939-b1d1-e7d1b21834cb" containerID="49092cff964806110781a1ce6f40a2126d58bcb45c2544f984759951802714c3" exitCode=0
Feb 18 19:54:15 crc kubenswrapper[4932]: I0218 19:54:15.597350 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cpzcj" event={"ID":"43f771cb-173f-4939-b1d1-e7d1b21834cb","Type":"ContainerDied","Data":"49092cff964806110781a1ce6f40a2126d58bcb45c2544f984759951802714c3"}
Feb 18 19:54:16 crc kubenswrapper[4932]: I0218 19:54:16.319261 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Feb 18 19:54:16 crc kubenswrapper[4932]: I0218 19:54:16.319498 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api-log" containerID="cri-o://914e01d59497860d56f1ffb2b4e5a0d0f4b154e5f7add6b40136f9f6dda7044e" gracePeriod=30
Feb 18 19:54:16 crc kubenswrapper[4932]: I0218 19:54:16.319599 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api" containerID="cri-o://7e4457ed87ab79af627e36e59e08bc3082309f2331d169ed0c87ae852f7b68d1" gracePeriod=30
Feb 18 19:54:16 crc kubenswrapper[4932]: I0218 19:54:16.607149 4932 generic.go:334] "Generic (PLEG): container finished" podID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerID="914e01d59497860d56f1ffb2b4e5a0d0f4b154e5f7add6b40136f9f6dda7044e" exitCode=143
Feb 18 19:54:16 crc kubenswrapper[4932]: I0218 19:54:16.607241 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"58b1eaea-5735-4c71-9c13-83bbece4cb4a","Type":"ContainerDied","Data":"914e01d59497860d56f1ffb2b4e5a0d0f4b154e5f7add6b40136f9f6dda7044e"}
Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.173654 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": read tcp 10.217.0.2:59718->10.217.0.169:9322: read: connection reset by peer"
Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.173683 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": read tcp 10.217.0.2:59710->10.217.0.169:9322: read: connection reset by peer"
Feb 18 19:54:17 crc kubenswrapper[4932]: E0218 19:54:17.452663 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 18 19:54:17 crc kubenswrapper[4932]: E0218 19:54:17.455799 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 18 19:54:17 crc kubenswrapper[4932]: E0218 19:54:17.459671 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 18 19:54:17 crc kubenswrapper[4932]: E0218 19:54:17.459703 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier"
Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.621522 4932 generic.go:334] "Generic (PLEG): container finished" podID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerID="7e4457ed87ab79af627e36e59e08bc3082309f2331d169ed0c87ae852f7b68d1" exitCode=0
Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.621565 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"58b1eaea-5735-4c71-9c13-83bbece4cb4a","Type":"ContainerDied","Data":"7e4457ed87ab79af627e36e59e08bc3082309f2331d169ed0c87ae852f7b68d1"}
Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.739947 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.739990 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 18 19:54:17 crc kubenswrapper[4932]: I0218 19:54:17.987606 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cpzcj"
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.098945 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zn59\" (UniqueName: \"kubernetes.io/projected/43f771cb-173f-4939-b1d1-e7d1b21834cb-kube-api-access-4zn59\") pod \"43f771cb-173f-4939-b1d1-e7d1b21834cb\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") "
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.099034 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-db-sync-config-data\") pod \"43f771cb-173f-4939-b1d1-e7d1b21834cb\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") "
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.099283 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-combined-ca-bundle\") pod \"43f771cb-173f-4939-b1d1-e7d1b21834cb\" (UID: \"43f771cb-173f-4939-b1d1-e7d1b21834cb\") "
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.112650 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "43f771cb-173f-4939-b1d1-e7d1b21834cb" (UID: "43f771cb-173f-4939-b1d1-e7d1b21834cb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.114479 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f771cb-173f-4939-b1d1-e7d1b21834cb-kube-api-access-4zn59" (OuterVolumeSpecName: "kube-api-access-4zn59") pod "43f771cb-173f-4939-b1d1-e7d1b21834cb" (UID: "43f771cb-173f-4939-b1d1-e7d1b21834cb"). InnerVolumeSpecName "kube-api-access-4zn59". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.145334 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43f771cb-173f-4939-b1d1-e7d1b21834cb" (UID: "43f771cb-173f-4939-b1d1-e7d1b21834cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.201899 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zn59\" (UniqueName: \"kubernetes.io/projected/43f771cb-173f-4939-b1d1-e7d1b21834cb-kube-api-access-4zn59\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.201940 4932 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.201953 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f771cb-173f-4939-b1d1-e7d1b21834cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.604801 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-75df984768-5mv9k"
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.634784 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cpzcj" event={"ID":"43f771cb-173f-4939-b1d1-e7d1b21834cb","Type":"ContainerDied","Data":"5c69e97847efcde57a769daf96ea0750cda2a27a34d2c7d54166590315ebcbc1"}
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.634821 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c69e97847efcde57a769daf96ea0750cda2a27a34d2c7d54166590315ebcbc1"
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.634843 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cpzcj"
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.635981 4932 generic.go:334] "Generic (PLEG): container finished" podID="c4c20fc2-cf78-41c9-9e37-c5bea35d472f" containerID="682f69e31fcb10c9b585e4fbecb1e2d4f8e82e3ec0c03204e9e0fefc1d901753" exitCode=0
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.636006 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kfzmp" event={"ID":"c4c20fc2-cf78-41c9-9e37-c5bea35d472f","Type":"ContainerDied","Data":"682f69e31fcb10c9b585e4fbecb1e2d4f8e82e3ec0c03204e9e0fefc1d901753"}
Feb 18 19:54:18 crc kubenswrapper[4932]: I0218 19:54:18.761562 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6877c868f8-jvwwn"
Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.272751 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-69669cb55f-sp2x2"]
Feb 18 19:54:19 crc kubenswrapper[4932]: E0218 19:54:19.273674 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f771cb-173f-4939-b1d1-e7d1b21834cb" containerName="barbican-db-sync"
Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.273700 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f771cb-173f-4939-b1d1-e7d1b21834cb" containerName="barbican-db-sync"
Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.273959 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f771cb-173f-4939-b1d1-e7d1b21834cb" containerName="barbican-db-sync"
Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.275549 4932 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.281209 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.281366 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.281560 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sxgcc" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.291568 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-69669cb55f-sp2x2"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.320634 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.323643 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.326931 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.344688 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.359213 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432032 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e25333-fed2-4944-8c7e-151c0bd6ab6c-logs\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432112 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-config-data-custom\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432140 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-config-data-custom\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432239 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-logs\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432273 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkz56\" (UniqueName: \"kubernetes.io/projected/05e25333-fed2-4944-8c7e-151c0bd6ab6c-kube-api-access-bkz56\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432310 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-combined-ca-bundle\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432336 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg48r\" (UniqueName: \"kubernetes.io/projected/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-kube-api-access-fg48r\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432359 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-combined-ca-bundle\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432428 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-config-data\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: 
\"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.432456 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-config-data\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.448521 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d4f4bc8df-ddvv7"] Feb 18 19:54:19 crc kubenswrapper[4932]: E0218 19:54:19.448892 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.448902 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api" Feb 18 19:54:19 crc kubenswrapper[4932]: E0218 19:54:19.448926 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api-log" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.448932 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api-log" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.449126 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.449142 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" containerName="watcher-api-log" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.450080 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.459739 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d4f4bc8df-ddvv7"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.533818 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-combined-ca-bundle\") pod \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.534557 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-custom-prometheus-ca\") pod \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.535016 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fdcl\" (UniqueName: \"kubernetes.io/projected/58b1eaea-5735-4c71-9c13-83bbece4cb4a-kube-api-access-5fdcl\") pod \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.535201 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-config-data\") pod \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\" (UID: \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.535392 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b1eaea-5735-4c71-9c13-83bbece4cb4a-logs\") pod \"58b1eaea-5735-4c71-9c13-83bbece4cb4a\" (UID: 
\"58b1eaea-5735-4c71-9c13-83bbece4cb4a\") " Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.535833 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-config-data-custom\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.535934 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-config-data-custom\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.536095 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-logs\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.536223 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkz56\" (UniqueName: \"kubernetes.io/projected/05e25333-fed2-4944-8c7e-151c0bd6ab6c-kube-api-access-bkz56\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.537892 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-combined-ca-bundle\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: 
\"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.538016 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg48r\" (UniqueName: \"kubernetes.io/projected/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-kube-api-access-fg48r\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.538141 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-combined-ca-bundle\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.538353 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-config-data\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.540161 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-config-data\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.540293 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e25333-fed2-4944-8c7e-151c0bd6ab6c-logs\") pod 
\"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.544468 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e25333-fed2-4944-8c7e-151c0bd6ab6c-logs\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.545366 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-config-data-custom\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.545625 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58b1eaea-5735-4c71-9c13-83bbece4cb4a-logs" (OuterVolumeSpecName: "logs") pod "58b1eaea-5735-4c71-9c13-83bbece4cb4a" (UID: "58b1eaea-5735-4c71-9c13-83bbece4cb4a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.545662 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-logs\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.551691 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-config-data-custom\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.572129 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-config-data\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.575063 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b1eaea-5735-4c71-9c13-83bbece4cb4a-kube-api-access-5fdcl" (OuterVolumeSpecName: "kube-api-access-5fdcl") pod "58b1eaea-5735-4c71-9c13-83bbece4cb4a" (UID: "58b1eaea-5735-4c71-9c13-83bbece4cb4a"). InnerVolumeSpecName "kube-api-access-5fdcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.583900 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e25333-fed2-4944-8c7e-151c0bd6ab6c-combined-ca-bundle\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.595858 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkz56\" (UniqueName: \"kubernetes.io/projected/05e25333-fed2-4944-8c7e-151c0bd6ab6c-kube-api-access-bkz56\") pod \"barbican-worker-69669cb55f-sp2x2\" (UID: \"05e25333-fed2-4944-8c7e-151c0bd6ab6c\") " pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.596000 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-combined-ca-bundle\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.597363 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg48r\" (UniqueName: \"kubernetes.io/projected/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-kube-api-access-fg48r\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: \"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.597466 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b01fc13-e894-46fd-8f24-d9ccdbce09e0-config-data\") pod \"barbican-keystone-listener-5c5bd6cc9b-42nrm\" (UID: 
\"6b01fc13-e894-46fd-8f24-d9ccdbce09e0\") " pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.601091 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7449c5884b-q9l4k"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.603919 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.605774 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.612416 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7449c5884b-q9l4k"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.629258 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "58b1eaea-5735-4c71-9c13-83bbece4cb4a" (UID: "58b1eaea-5735-4c71-9c13-83bbece4cb4a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.630207 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58b1eaea-5735-4c71-9c13-83bbece4cb4a" (UID: "58b1eaea-5735-4c71-9c13-83bbece4cb4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642512 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-nb\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642571 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-config\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642652 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxn82\" (UniqueName: \"kubernetes.io/projected/d6a2f5f7-e711-48ad-9455-4c9591d751a4-kube-api-access-kxn82\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642741 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-svc\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642796 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: 
\"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642821 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-swift-storage-0\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642954 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b1eaea-5735-4c71-9c13-83bbece4cb4a-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642972 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642986 4932 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.642996 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fdcl\" (UniqueName: \"kubernetes.io/projected/58b1eaea-5735-4c71-9c13-83bbece4cb4a-kube-api-access-5fdcl\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.653130 4932 generic.go:334] "Generic (PLEG): container finished" podID="3f831817-b833-4ee3-b1e9-77d9c02416ed" containerID="80213ebfed248f23a59e2cc3d7242b684303a348ef8453068ab05718b9f4df29" exitCode=0 Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.653281 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nqxxn" 
event={"ID":"3f831817-b833-4ee3-b1e9-77d9c02416ed","Type":"ContainerDied","Data":"80213ebfed248f23a59e2cc3d7242b684303a348ef8453068ab05718b9f4df29"} Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.665432 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-config-data" (OuterVolumeSpecName: "config-data") pod "58b1eaea-5735-4c71-9c13-83bbece4cb4a" (UID: "58b1eaea-5735-4c71-9c13-83bbece4cb4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.667474 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"58b1eaea-5735-4c71-9c13-83bbece4cb4a","Type":"ContainerDied","Data":"f81f3e519e272ec341248a6b7ba9a38b40c5833968d66d807fd43af06ff4634a"} Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.667517 4932 scope.go:117] "RemoveContainer" containerID="7e4457ed87ab79af627e36e59e08bc3082309f2331d169ed0c87ae852f7b68d1" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.667680 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.677098 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerStarted","Data":"a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942"} Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.692356 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-69669cb55f-sp2x2" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.696601 4932 scope.go:117] "RemoveContainer" containerID="914e01d59497860d56f1ffb2b4e5a0d0f4b154e5f7add6b40136f9f6dda7044e" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.717623 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.746933 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505f490e-dca8-49ae-aeeb-3392c065d841-logs\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.746986 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747007 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-swift-storage-0\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747065 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " 
pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747101 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data-custom\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747144 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-nb\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747188 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-config\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747212 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-combined-ca-bundle\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747233 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxn82\" (UniqueName: \"kubernetes.io/projected/d6a2f5f7-e711-48ad-9455-4c9591d751a4-kube-api-access-kxn82\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " 
pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747254 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/505f490e-dca8-49ae-aeeb-3392c065d841-kube-api-access-b9s68\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747287 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-svc\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.747358 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b1eaea-5735-4c71-9c13-83bbece4cb4a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.748152 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-svc\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.748693 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-sb\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.749250 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-swift-storage-0\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.749836 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-nb\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.752521 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-config\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: E0218 19:54:19.753185 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.761250 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.769953 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxn82\" (UniqueName: \"kubernetes.io/projected/d6a2f5f7-e711-48ad-9455-4c9591d751a4-kube-api-access-kxn82\") pod \"dnsmasq-dns-7d4f4bc8df-ddvv7\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.773079 4932 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.785236 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.786797 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.788937 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.789364 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.789534 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.793680 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.849459 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505f490e-dca8-49ae-aeeb-3392c065d841-logs\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.849776 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.849823 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data-custom\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.849912 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-combined-ca-bundle\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.849938 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/505f490e-dca8-49ae-aeeb-3392c065d841-kube-api-access-b9s68\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.850668 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505f490e-dca8-49ae-aeeb-3392c065d841-logs\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.861779 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data-custom\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.872239 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.874091 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/505f490e-dca8-49ae-aeeb-3392c065d841-kube-api-access-b9s68\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.874642 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-combined-ca-bundle\") pod \"barbican-api-7449c5884b-q9l4k\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.925299 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.951638 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km2km\" (UniqueName: \"kubernetes.io/projected/3bab1a8c-1512-4353-90c0-b145865fc593-kube-api-access-km2km\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.951718 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.951740 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-config-data\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.951772 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bab1a8c-1512-4353-90c0-b145865fc593-logs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.952924 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: 
I0218 19:54:19.952990 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.953145 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-public-tls-certs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:19 crc kubenswrapper[4932]: I0218 19:54:19.955281 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.055586 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.055927 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.055956 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-public-tls-certs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" 
Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.056012 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km2km\" (UniqueName: \"kubernetes.io/projected/3bab1a8c-1512-4353-90c0-b145865fc593-kube-api-access-km2km\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.056087 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.056109 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-config-data\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.056139 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bab1a8c-1512-4353-90c0-b145865fc593-logs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.056591 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bab1a8c-1512-4353-90c0-b145865fc593-logs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.062104 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.071816 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.076715 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.083922 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km2km\" (UniqueName: \"kubernetes.io/projected/3bab1a8c-1512-4353-90c0-b145865fc593-kube-api-access-km2km\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.085362 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-public-tls-certs\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.092839 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bab1a8c-1512-4353-90c0-b145865fc593-config-data\") pod \"watcher-api-0\" (UID: \"3bab1a8c-1512-4353-90c0-b145865fc593\") " pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: 
I0218 19:54:20.109477 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.155079 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.262841 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-config\") pod \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.262893 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-combined-ca-bundle\") pod \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.263050 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpwbx\" (UniqueName: \"kubernetes.io/projected/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-kube-api-access-mpwbx\") pod \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\" (UID: \"c4c20fc2-cf78-41c9-9e37-c5bea35d472f\") " Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.268899 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-kube-api-access-mpwbx" (OuterVolumeSpecName: "kube-api-access-mpwbx") pod "c4c20fc2-cf78-41c9-9e37-c5bea35d472f" (UID: "c4c20fc2-cf78-41c9-9e37-c5bea35d472f"). InnerVolumeSpecName "kube-api-access-mpwbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.335322 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4c20fc2-cf78-41c9-9e37-c5bea35d472f" (UID: "c4c20fc2-cf78-41c9-9e37-c5bea35d472f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.336638 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-config" (OuterVolumeSpecName: "config") pod "c4c20fc2-cf78-41c9-9e37-c5bea35d472f" (UID: "c4c20fc2-cf78-41c9-9e37-c5bea35d472f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.364896 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.364933 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.364944 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpwbx\" (UniqueName: \"kubernetes.io/projected/c4c20fc2-cf78-41c9-9e37-c5bea35d472f-kube-api-access-mpwbx\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.383187 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.395552 4932 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-worker-69669cb55f-sp2x2"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.610934 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7449c5884b-q9l4k"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.634348 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d4f4bc8df-ddvv7"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.774483 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" event={"ID":"d6a2f5f7-e711-48ad-9455-4c9591d751a4","Type":"ContainerStarted","Data":"80607fa7dffc51679b1e994f65af38dcef63e402a63bb96a1efa6d78960754ca"} Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.795888 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.799815 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerStarted","Data":"2b1e75d16cb30a9c6ffb3c5157c9587182f8a699106125481032c4efb8da098d"} Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.799980 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="ceilometer-notification-agent" containerID="cri-o://c7ee5732776c18a927c72c5ff1cc708a0c4c7cbb7be39c25d6f15f19eb006153" gracePeriod=30 Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.800225 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.800500 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="proxy-httpd" 
containerID="cri-o://2b1e75d16cb30a9c6ffb3c5157c9587182f8a699106125481032c4efb8da098d" gracePeriod=30 Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.800548 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="sg-core" containerID="cri-o://41bbf0004efdd834aa04334d49096cca68b41c5d9f117836f2c8dc6fe6f5d5be" gracePeriod=30 Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.807093 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-69669cb55f-sp2x2" event={"ID":"05e25333-fed2-4944-8c7e-151c0bd6ab6c","Type":"ContainerStarted","Data":"cdfbe590e51619b2f5eefc007b6ced59292930f1279dfa6fd7af6821d4acb829"} Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.825875 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d4f4bc8df-ddvv7"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.861546 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" event={"ID":"6b01fc13-e894-46fd-8f24-d9ccdbce09e0","Type":"ContainerStarted","Data":"db5629b00eceeeaf6a12066f216b0267dcb1e9ee48dd48e12f7a4e2e2d732d15"} Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.880198 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7449c5884b-q9l4k" event={"ID":"505f490e-dca8-49ae-aeeb-3392c065d841","Type":"ContainerStarted","Data":"c7d003d8c5cc0d3edc83d2a07bde218aaf6fe754f628f14115b8310796a97a1b"} Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.892491 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-697b589695-vqq6h"] Feb 18 19:54:20 crc kubenswrapper[4932]: E0218 19:54:20.892903 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c20fc2-cf78-41c9-9e37-c5bea35d472f" containerName="neutron-db-sync" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.892919 
4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c20fc2-cf78-41c9-9e37-c5bea35d472f" containerName="neutron-db-sync" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.893131 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c20fc2-cf78-41c9-9e37-c5bea35d472f" containerName="neutron-db-sync" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.894131 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.901808 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697b589695-vqq6h"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.904778 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kfzmp" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.905737 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kfzmp" event={"ID":"c4c20fc2-cf78-41c9-9e37-c5bea35d472f","Type":"ContainerDied","Data":"d6c505a399db7407167ba85b30249143bd9bde443aac40b322a8f403af6c7869"} Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.905794 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6c505a399db7407167ba85b30249143bd9bde443aac40b322a8f403af6c7869" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.983348 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-config\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.983502 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-sb\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.983607 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-swift-storage-0\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.983828 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-svc\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.984252 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svh7b\" (UniqueName: \"kubernetes.io/projected/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-kube-api-access-svh7b\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.984374 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-nb\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.987926 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-5966846f96-hbrsw"] Feb 18 19:54:20 crc kubenswrapper[4932]: I0218 19:54:20.989596 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.002197 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.003270 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.003424 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rp826" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.003534 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.010742 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5966846f96-hbrsw"] Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-config\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087376 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svh7b\" (UniqueName: \"kubernetes.io/projected/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-kube-api-access-svh7b\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087420 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-nb\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087453 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d54tn\" (UniqueName: \"kubernetes.io/projected/fb1c0405-2770-4a03-ba51-c78005d57ad9-kube-api-access-d54tn\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087520 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-config\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087561 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-sb\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087580 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-ovndb-tls-certs\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087597 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-swift-storage-0\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087643 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-httpd-config\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087682 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-svc\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.087729 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-combined-ca-bundle\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.089096 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-nb\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.089945 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-swift-storage-0\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.090521 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-svc\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.091292 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-sb\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.103049 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-config\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.120875 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svh7b\" (UniqueName: \"kubernetes.io/projected/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-kube-api-access-svh7b\") pod \"dnsmasq-dns-697b589695-vqq6h\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.147916 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.189584 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-config\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.189654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d54tn\" (UniqueName: \"kubernetes.io/projected/fb1c0405-2770-4a03-ba51-c78005d57ad9-kube-api-access-d54tn\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.189723 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-ovndb-tls-certs\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.189763 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-httpd-config\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.189841 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-combined-ca-bundle\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.194920 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b1eaea-5735-4c71-9c13-83bbece4cb4a" 
path="/var/lib/kubelet/pods/58b1eaea-5735-4c71-9c13-83bbece4cb4a/volumes" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.198441 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-combined-ca-bundle\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.202967 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-config\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.204221 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-httpd-config\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.206724 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d54tn\" (UniqueName: \"kubernetes.io/projected/fb1c0405-2770-4a03-ba51-c78005d57ad9-kube-api-access-d54tn\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.219982 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-ovndb-tls-certs\") pod \"neutron-5966846f96-hbrsw\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.242418 4932 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.384227 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6877c868f8-jvwwn" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.389968 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.482346 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75df984768-5mv9k"] Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.556928 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703375 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-combined-ca-bundle\") pod \"3f831817-b833-4ee3-b1e9-77d9c02416ed\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703644 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f831817-b833-4ee3-b1e9-77d9c02416ed-etc-machine-id\") pod \"3f831817-b833-4ee3-b1e9-77d9c02416ed\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703727 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqllv\" (UniqueName: \"kubernetes.io/projected/3f831817-b833-4ee3-b1e9-77d9c02416ed-kube-api-access-qqllv\") pod \"3f831817-b833-4ee3-b1e9-77d9c02416ed\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703755 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-config-data\") pod \"3f831817-b833-4ee3-b1e9-77d9c02416ed\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703841 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-db-sync-config-data\") pod \"3f831817-b833-4ee3-b1e9-77d9c02416ed\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703841 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f831817-b833-4ee3-b1e9-77d9c02416ed-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3f831817-b833-4ee3-b1e9-77d9c02416ed" (UID: "3f831817-b833-4ee3-b1e9-77d9c02416ed"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.703947 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-scripts\") pod \"3f831817-b833-4ee3-b1e9-77d9c02416ed\" (UID: \"3f831817-b833-4ee3-b1e9-77d9c02416ed\") " Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.704368 4932 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f831817-b833-4ee3-b1e9-77d9c02416ed-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.714375 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3f831817-b833-4ee3-b1e9-77d9c02416ed" (UID: 
"3f831817-b833-4ee3-b1e9-77d9c02416ed"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.714465 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f831817-b833-4ee3-b1e9-77d9c02416ed-kube-api-access-qqllv" (OuterVolumeSpecName: "kube-api-access-qqllv") pod "3f831817-b833-4ee3-b1e9-77d9c02416ed" (UID: "3f831817-b833-4ee3-b1e9-77d9c02416ed"). InnerVolumeSpecName "kube-api-access-qqllv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.714476 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-scripts" (OuterVolumeSpecName: "scripts") pod "3f831817-b833-4ee3-b1e9-77d9c02416ed" (UID: "3f831817-b833-4ee3-b1e9-77d9c02416ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.753683 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f831817-b833-4ee3-b1e9-77d9c02416ed" (UID: "3f831817-b833-4ee3-b1e9-77d9c02416ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.778431 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-config-data" (OuterVolumeSpecName: "config-data") pod "3f831817-b833-4ee3-b1e9-77d9c02416ed" (UID: "3f831817-b833-4ee3-b1e9-77d9c02416ed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.808368 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqllv\" (UniqueName: \"kubernetes.io/projected/3f831817-b833-4ee3-b1e9-77d9c02416ed-kube-api-access-qqllv\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.808400 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.808409 4932 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.808424 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.808435 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f831817-b833-4ee3-b1e9-77d9c02416ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.955199 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:54:21 crc kubenswrapper[4932]: E0218 19:54:21.955775 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f831817-b833-4ee3-b1e9-77d9c02416ed" containerName="cinder-db-sync" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.955790 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f831817-b833-4ee3-b1e9-77d9c02416ed" containerName="cinder-db-sync" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 
19:54:21.955981 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f831817-b833-4ee3-b1e9-77d9c02416ed" containerName="cinder-db-sync" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.956946 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.966165 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.976243 4932 generic.go:334] "Generic (PLEG): container finished" podID="d6a2f5f7-e711-48ad-9455-4c9591d751a4" containerID="e2da060f7790ff93315a845bafd7b63811c1d420398b145f60768509ea598a27" exitCode=0 Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.976307 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" event={"ID":"d6a2f5f7-e711-48ad-9455-4c9591d751a4","Type":"ContainerDied","Data":"e2da060f7790ff93315a845bafd7b63811c1d420398b145f60768509ea598a27"} Feb 18 19:54:21 crc kubenswrapper[4932]: I0218 19:54:21.978094 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.021415 4932 generic.go:334] "Generic (PLEG): container finished" podID="079e3d7d-bd4f-4198-8606-95192a514c07" containerID="2b1e75d16cb30a9c6ffb3c5157c9587182f8a699106125481032c4efb8da098d" exitCode=0 Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.021447 4932 generic.go:334] "Generic (PLEG): container finished" podID="079e3d7d-bd4f-4198-8606-95192a514c07" containerID="41bbf0004efdd834aa04334d49096cca68b41c5d9f117836f2c8dc6fe6f5d5be" exitCode=2 Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.021513 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerDied","Data":"2b1e75d16cb30a9c6ffb3c5157c9587182f8a699106125481032c4efb8da098d"} Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.021540 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerDied","Data":"41bbf0004efdd834aa04334d49096cca68b41c5d9f117836f2c8dc6fe6f5d5be"} Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.052870 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697b589695-vqq6h"] Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.110502 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7449c5884b-q9l4k" event={"ID":"505f490e-dca8-49ae-aeeb-3392c065d841","Type":"ContainerStarted","Data":"7df07bd853489d447f256acaae8700635b716bd7ed59696363bdaa6d7cf3ee38"} Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.110562 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7449c5884b-q9l4k" event={"ID":"505f490e-dca8-49ae-aeeb-3392c065d841","Type":"ContainerStarted","Data":"98b2099bab8a6e146a0799442e00594a5328d749f9417d06bb347ef9fb18f009"} Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.110972 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.111012 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.121337 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08fb57b1-f237-4913-8897-a21202273268-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 
19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.121432 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-scripts\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.121490 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfcnj\" (UniqueName: \"kubernetes.io/projected/08fb57b1-f237-4913-8897-a21202273268-kube-api-access-lfcnj\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.121509 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.121560 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.121591 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.156268 4932 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697b589695-vqq6h"] Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.182499 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3bab1a8c-1512-4353-90c0-b145865fc593","Type":"ContainerStarted","Data":"b39b378463df4659a7d815ad559e055320c381d547ead7d0839359df1016468c"} Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.182548 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3bab1a8c-1512-4353-90c0-b145865fc593","Type":"ContainerStarted","Data":"d18571c0d5c05e581ab7e9d6c5c54075a1b2cf346cc6716193737fc498f14d6c"} Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.182561 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"3bab1a8c-1512-4353-90c0-b145865fc593","Type":"ContainerStarted","Data":"0e7eb3e8f305c855ff6ec62060ea2b4e3728920ab8aae8cac868ca003e3590f4"} Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.183367 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.209574 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="3bab1a8c-1512-4353-90c0-b145865fc593" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.178:9322/\": dial tcp 10.217.0.178:9322: connect: connection refused" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.228615 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfcnj\" (UniqueName: \"kubernetes.io/projected/08fb57b1-f237-4913-8897-a21202273268-kube-api-access-lfcnj\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.228684 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.228793 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.228859 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.228913 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08fb57b1-f237-4913-8897-a21202273268-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.229036 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-scripts\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.239286 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nqxxn" 
event={"ID":"3f831817-b833-4ee3-b1e9-77d9c02416ed","Type":"ContainerDied","Data":"e5dc27f7492f1faa0455250ffd7868de8258df87b7d776e52911e76784a162ec"} Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.239352 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5dc27f7492f1faa0455250ffd7868de8258df87b7d776e52911e76784a162ec" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.239517 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nqxxn" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.243188 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75df984768-5mv9k" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" containerID="cri-o://c14c2db9c2e97146ded5c1be64f375a20e4d3dc8027f2eb556b8226700b572e9" gracePeriod=30 Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.238167 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75df984768-5mv9k" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon-log" containerID="cri-o://8938c10b66b4f6d7e20437bee59ce3c16a7181c0a809f3e865b01b219862d8d7" gracePeriod=30 Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.273922 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08fb57b1-f237-4913-8897-a21202273268-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.288309 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 
crc kubenswrapper[4932]: I0218 19:54:22.292598 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.295696 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-scripts\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.301843 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-855cb46c75-kwghr"] Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.307511 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.309063 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.335271 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfcnj\" (UniqueName: \"kubernetes.io/projected/08fb57b1-f237-4913-8897-a21202273268-kube-api-access-lfcnj\") pod \"cinder-scheduler-0\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") " pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.385472 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cb46c75-kwghr"] Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.396302 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7449c5884b-q9l4k" podStartSLOduration=3.396275381 podStartE2EDuration="3.396275381s" podCreationTimestamp="2026-02-18 19:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:22.153931474 +0000 UTC m=+1225.735886319" watchObservedRunningTime="2026-02-18 19:54:22.396275381 +0000 UTC m=+1225.978230246" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.421836 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.42181758 podStartE2EDuration="3.42181758s" podCreationTimestamp="2026-02-18 19:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:22.260968319 +0000 UTC m=+1225.842923164" watchObservedRunningTime="2026-02-18 19:54:22.42181758 +0000 UTC m=+1226.003772415" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.439668 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.441302 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.443663 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9qmk\" (UniqueName: \"kubernetes.io/projected/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-kube-api-access-b9qmk\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.443885 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-swift-storage-0\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.443921 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-config\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.444014 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-nb\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.444035 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-sb\") pod 
\"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.444096 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-svc\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.448448 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.454478 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5966846f96-hbrsw"] Feb 18 19:54:22 crc kubenswrapper[4932]: E0218 19:54:22.460315 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:22 crc kubenswrapper[4932]: E0218 19:54:22.469291 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.469441 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:54:22 crc kubenswrapper[4932]: E0218 19:54:22.477947 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:22 crc kubenswrapper[4932]: E0218 19:54:22.478007 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.511542 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545332 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-swift-storage-0\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545369 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-config\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545391 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdllf\" (UniqueName: \"kubernetes.io/projected/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-kube-api-access-wdllf\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545417 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545441 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-nb\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545457 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-sb\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545480 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-logs\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545500 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-svc\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545536 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-scripts\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545553 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data-custom\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545579 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9qmk\" (UniqueName: \"kubernetes.io/projected/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-kube-api-access-b9qmk\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545627 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.545651 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.546398 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-swift-storage-0\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: 
\"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.546892 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-config\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.552426 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-nb\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.552989 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-sb\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.553502 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-svc\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.586812 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9qmk\" (UniqueName: \"kubernetes.io/projected/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-kube-api-access-b9qmk\") pod \"dnsmasq-dns-855cb46c75-kwghr\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc 
kubenswrapper[4932]: I0218 19:54:22.653273 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.653332 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.653485 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdllf\" (UniqueName: \"kubernetes.io/projected/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-kube-api-access-wdllf\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.653516 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.653550 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-logs\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.653597 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-scripts\") pod 
\"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.653617 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data-custom\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.657298 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.657643 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-logs\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.665739 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.671913 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data-custom\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.672997 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.675498 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-scripts\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.683998 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdllf\" (UniqueName: \"kubernetes.io/projected/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-kube-api-access-wdllf\") pod \"cinder-api-0\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " pod="openstack/cinder-api-0" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.783503 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:22 crc kubenswrapper[4932]: I0218 19:54:22.816349 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:54:23 crc kubenswrapper[4932]: W0218 19:54:23.083750 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1c0405_2770_4a03_ba51_c78005d57ad9.slice/crio-b0f1a3c159b5f59fb68d5b1503c08f8c96ed4a7d57c2077fe1e9116b9b2fbf3b WatchSource:0}: Error finding container b0f1a3c159b5f59fb68d5b1503c08f8c96ed4a7d57c2077fe1e9116b9b2fbf3b: Status 404 returned error can't find the container with id b0f1a3c159b5f59fb68d5b1503c08f8c96ed4a7d57c2077fe1e9116b9b2fbf3b Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.251955 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5966846f96-hbrsw" event={"ID":"fb1c0405-2770-4a03-ba51-c78005d57ad9","Type":"ContainerStarted","Data":"b0f1a3c159b5f59fb68d5b1503c08f8c96ed4a7d57c2077fe1e9116b9b2fbf3b"} Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.694350 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.775857 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-sb\") pod \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.775900 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-config\") pod \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.775980 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-nb\") pod \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.776030 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxn82\" (UniqueName: \"kubernetes.io/projected/d6a2f5f7-e711-48ad-9455-4c9591d751a4-kube-api-access-kxn82\") pod \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.776150 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-svc\") pod \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.776200 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-swift-storage-0\") pod \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\" (UID: \"d6a2f5f7-e711-48ad-9455-4c9591d751a4\") " Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.781922 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a2f5f7-e711-48ad-9455-4c9591d751a4-kube-api-access-kxn82" (OuterVolumeSpecName: "kube-api-access-kxn82") pod "d6a2f5f7-e711-48ad-9455-4c9591d751a4" (UID: "d6a2f5f7-e711-48ad-9455-4c9591d751a4"). InnerVolumeSpecName "kube-api-access-kxn82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.804510 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d6a2f5f7-e711-48ad-9455-4c9591d751a4" (UID: "d6a2f5f7-e711-48ad-9455-4c9591d751a4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.805239 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6a2f5f7-e711-48ad-9455-4c9591d751a4" (UID: "d6a2f5f7-e711-48ad-9455-4c9591d751a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.812638 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-config" (OuterVolumeSpecName: "config") pod "d6a2f5f7-e711-48ad-9455-4c9591d751a4" (UID: "d6a2f5f7-e711-48ad-9455-4c9591d751a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.821754 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6a2f5f7-e711-48ad-9455-4c9591d751a4" (UID: "d6a2f5f7-e711-48ad-9455-4c9591d751a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.822404 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6a2f5f7-e711-48ad-9455-4c9591d751a4" (UID: "d6a2f5f7-e711-48ad-9455-4c9591d751a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.889477 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.889686 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.889696 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.889707 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxn82\" (UniqueName: \"kubernetes.io/projected/d6a2f5f7-e711-48ad-9455-4c9591d751a4-kube-api-access-kxn82\") on node \"crc\" DevicePath \"\"" Feb 18 
19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.889717 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:23 crc kubenswrapper[4932]: I0218 19:54:23.889725 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a2f5f7-e711-48ad-9455-4c9591d751a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.282407 4932 generic.go:334] "Generic (PLEG): container finished" podID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerID="a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942" exitCode=1 Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.282829 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerDied","Data":"a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.282871 4932 scope.go:117] "RemoveContainer" containerID="4de093a700139da46d3f66815f5051f7a579a847fbfe3c9c9fef66a2d56e8e8c" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.283694 4932 scope.go:117] "RemoveContainer" containerID="a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942" Feb 18 19:54:24 crc kubenswrapper[4932]: E0218 19:54:24.283990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0882c686-1b07-4ac7-a6be-148eff7faa19)\"" pod="openstack/watcher-decision-engine-0" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" Feb 18 19:54:24 crc kubenswrapper[4932]: W0218 19:54:24.288720 4932 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebab9a68_9ab1_4d04_84ec_9f54b1e6e616.slice/crio-abf4fe1aeef8ebb3bf6d40f6b972486e6ef67f658fa19698f1bd32267dd142b9 WatchSource:0}: Error finding container abf4fe1aeef8ebb3bf6d40f6b972486e6ef67f658fa19698f1bd32267dd142b9: Status 404 returned error can't find the container with id abf4fe1aeef8ebb3bf6d40f6b972486e6ef67f658fa19698f1bd32267dd142b9 Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.291405 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-69669cb55f-sp2x2" event={"ID":"05e25333-fed2-4944-8c7e-151c0bd6ab6c","Type":"ContainerStarted","Data":"72119b86a375ae2c811dba508a69261ce4b7198d03c91fdf10bcd82e870b617f"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.305320 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" event={"ID":"6b01fc13-e894-46fd-8f24-d9ccdbce09e0","Type":"ContainerStarted","Data":"320475adc150dd4aac637b0e3a86249fd4a7cd7866314ced895ac0f66a8016d7"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.307082 4932 generic.go:334] "Generic (PLEG): container finished" podID="2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" containerID="780645213b3d6509f2ae57179e0c778515aafd9091ab4640640a6222945146e1" exitCode=0 Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.307158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b589695-vqq6h" event={"ID":"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea","Type":"ContainerDied","Data":"780645213b3d6509f2ae57179e0c778515aafd9091ab4640640a6222945146e1"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.310202 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-855cb46c75-kwghr"] Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.310237 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b589695-vqq6h" 
event={"ID":"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea","Type":"ContainerStarted","Data":"2544e4221da69a4d78f85e7f0d63e78abed137750dbac8c78c132c0ee3b4a87d"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.324682 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.330610 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5966846f96-hbrsw" event={"ID":"fb1c0405-2770-4a03-ba51-c78005d57ad9","Type":"ContainerStarted","Data":"8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.338666 4932 generic.go:334] "Generic (PLEG): container finished" podID="dec0e208-2bfc-4661-8395-c56418bb9307" containerID="c14c2db9c2e97146ded5c1be64f375a20e4d3dc8027f2eb556b8226700b572e9" exitCode=0 Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.338755 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75df984768-5mv9k" event={"ID":"dec0e208-2bfc-4661-8395-c56418bb9307","Type":"ContainerDied","Data":"c14c2db9c2e97146ded5c1be64f375a20e4d3dc8027f2eb556b8226700b572e9"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.340810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" event={"ID":"d6a2f5f7-e711-48ad-9455-4c9591d751a4","Type":"ContainerDied","Data":"80607fa7dffc51679b1e994f65af38dcef63e402a63bb96a1efa6d78960754ca"} Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.340893 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4f4bc8df-ddvv7" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.459296 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:54:24 crc kubenswrapper[4932]: W0218 19:54:24.493509 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08fb57b1_f237_4913_8897_a21202273268.slice/crio-7e142471735b8d8ede9ef15b6d6b2ffab0ed91de871dc5c56cbedc4c3564c6af WatchSource:0}: Error finding container 7e142471735b8d8ede9ef15b6d6b2ffab0ed91de871dc5c56cbedc4c3564c6af: Status 404 returned error can't find the container with id 7e142471735b8d8ede9ef15b6d6b2ffab0ed91de871dc5c56cbedc4c3564c6af Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.511865 4932 scope.go:117] "RemoveContainer" containerID="e2da060f7790ff93315a845bafd7b63811c1d420398b145f60768509ea598a27" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.607255 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d4f4bc8df-ddvv7"] Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.629652 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d4f4bc8df-ddvv7"] Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.809383 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.938420 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-config\") pod \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.938482 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svh7b\" (UniqueName: \"kubernetes.io/projected/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-kube-api-access-svh7b\") pod \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.938570 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-sb\") pod \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.938710 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-nb\") pod \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.938733 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-svc\") pod \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.938810 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-swift-storage-0\") pod \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\" (UID: \"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea\") " Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.943979 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-kube-api-access-svh7b" (OuterVolumeSpecName: "kube-api-access-svh7b") pod "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" (UID: "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea"). InnerVolumeSpecName "kube-api-access-svh7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.964011 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" (UID: "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.967573 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-config" (OuterVolumeSpecName: "config") pod "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" (UID: "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.982002 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" (UID: "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:24 crc kubenswrapper[4932]: I0218 19:54:24.985629 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" (UID: "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.010757 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" (UID: "2c9f2985-c3be-4e9c-a12a-1bae71d1bcea"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.040844 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.040868 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.040876 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.040886 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-dns-swift-storage-0\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.040895 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.040903 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svh7b\" (UniqueName: \"kubernetes.io/projected/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea-kube-api-access-svh7b\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.109714 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.201435 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a2f5f7-e711-48ad-9455-4c9591d751a4" path="/var/lib/kubelet/pods/d6a2f5f7-e711-48ad-9455-4c9591d751a4/volumes" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.375143 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08fb57b1-f237-4913-8897-a21202273268","Type":"ContainerStarted","Data":"7e142471735b8d8ede9ef15b6d6b2ffab0ed91de871dc5c56cbedc4c3564c6af"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.380443 4932 generic.go:334] "Generic (PLEG): container finished" podID="079e3d7d-bd4f-4198-8606-95192a514c07" containerID="c7ee5732776c18a927c72c5ff1cc708a0c4c7cbb7be39c25d6f15f19eb006153" exitCode=0 Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.380519 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerDied","Data":"c7ee5732776c18a927c72c5ff1cc708a0c4c7cbb7be39c25d6f15f19eb006153"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.397577 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" 
event={"ID":"6b01fc13-e894-46fd-8f24-d9ccdbce09e0","Type":"ContainerStarted","Data":"5f663127e38a953cd3de0606d0bf65e582824975b52335bdeb26c0e4505ad974"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.409515 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697b589695-vqq6h" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.410264 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697b589695-vqq6h" event={"ID":"2c9f2985-c3be-4e9c-a12a-1bae71d1bcea","Type":"ContainerDied","Data":"2544e4221da69a4d78f85e7f0d63e78abed137750dbac8c78c132c0ee3b4a87d"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.410296 4932 scope.go:117] "RemoveContainer" containerID="780645213b3d6509f2ae57179e0c778515aafd9091ab4640640a6222945146e1" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.420703 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5c5bd6cc9b-42nrm" podStartSLOduration=2.988441068 podStartE2EDuration="6.420688105s" podCreationTimestamp="2026-02-18 19:54:19 +0000 UTC" firstStartedPulling="2026-02-18 19:54:20.39007905 +0000 UTC m=+1223.972033895" lastFinishedPulling="2026-02-18 19:54:23.822326077 +0000 UTC m=+1227.404280932" observedRunningTime="2026-02-18 19:54:25.411302454 +0000 UTC m=+1228.993257289" watchObservedRunningTime="2026-02-18 19:54:25.420688105 +0000 UTC m=+1229.002642950" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.444151 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5966846f96-hbrsw" event={"ID":"fb1c0405-2770-4a03-ba51-c78005d57ad9","Type":"ContainerStarted","Data":"44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.444973 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:25 crc kubenswrapper[4932]: 
I0218 19:54:25.463358 4932 generic.go:334] "Generic (PLEG): container finished" podID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerID="b1963dc8bdedaa6e9c39260e4aa454ec9b1f122ff3e931be78b28e85782c2717" exitCode=0 Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.463448 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" event={"ID":"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616","Type":"ContainerDied","Data":"b1963dc8bdedaa6e9c39260e4aa454ec9b1f122ff3e931be78b28e85782c2717"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.463477 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" event={"ID":"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616","Type":"ContainerStarted","Data":"abf4fe1aeef8ebb3bf6d40f6b972486e6ef67f658fa19698f1bd32267dd142b9"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.498233 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697b589695-vqq6h"] Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.527220 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-697b589695-vqq6h"] Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.546604 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5966846f96-hbrsw" podStartSLOduration=5.546584565 podStartE2EDuration="5.546584565s" podCreationTimestamp="2026-02-18 19:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:25.524223894 +0000 UTC m=+1229.106178749" watchObservedRunningTime="2026-02-18 19:54:25.546584565 +0000 UTC m=+1229.128539400" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.560426 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-69669cb55f-sp2x2" 
event={"ID":"05e25333-fed2-4944-8c7e-151c0bd6ab6c","Type":"ContainerStarted","Data":"e72c600997093359029736ce17b3968d1b22e8dfb4825143cd4a61465c27edf8"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.590981 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30bd9d4f-e84f-4320-9057-80d3d53f7ebb","Type":"ContainerStarted","Data":"00c41dbe58ad3dc460e41a4f8f86809ef9204f330e62756cf3eed317cf475042"} Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.593880 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-69669cb55f-sp2x2" podStartSLOduration=3.183690635 podStartE2EDuration="6.593866939s" podCreationTimestamp="2026-02-18 19:54:19 +0000 UTC" firstStartedPulling="2026-02-18 19:54:20.387315312 +0000 UTC m=+1223.969270157" lastFinishedPulling="2026-02-18 19:54:23.797491596 +0000 UTC m=+1227.379446461" observedRunningTime="2026-02-18 19:54:25.591334857 +0000 UTC m=+1229.173289712" watchObservedRunningTime="2026-02-18 19:54:25.593866939 +0000 UTC m=+1229.175821784" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.643821 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.756936 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-sg-core-conf-yaml\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.756998 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-scripts\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.757141 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-log-httpd\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.757185 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgvx4\" (UniqueName: \"kubernetes.io/projected/079e3d7d-bd4f-4198-8606-95192a514c07-kube-api-access-xgvx4\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.757243 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-run-httpd\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.757270 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-combined-ca-bundle\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.757329 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-config-data\") pod \"079e3d7d-bd4f-4198-8606-95192a514c07\" (UID: \"079e3d7d-bd4f-4198-8606-95192a514c07\") " Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.759438 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.759508 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.763309 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/079e3d7d-bd4f-4198-8606-95192a514c07-kube-api-access-xgvx4" (OuterVolumeSpecName: "kube-api-access-xgvx4") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). InnerVolumeSpecName "kube-api-access-xgvx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.786049 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-scripts" (OuterVolumeSpecName: "scripts") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.811277 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.848194 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.867701 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.867740 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.867749 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.867758 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgvx4\" (UniqueName: \"kubernetes.io/projected/079e3d7d-bd4f-4198-8606-95192a514c07-kube-api-access-xgvx4\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.867768 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/079e3d7d-bd4f-4198-8606-95192a514c07-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.867778 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.899025 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-config-data" (OuterVolumeSpecName: "config-data") pod "079e3d7d-bd4f-4198-8606-95192a514c07" (UID: "079e3d7d-bd4f-4198-8606-95192a514c07"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:25 crc kubenswrapper[4932]: I0218 19:54:25.969794 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/079e3d7d-bd4f-4198-8606-95192a514c07-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.112099 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.373998 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.615541 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" event={"ID":"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616","Type":"ContainerStarted","Data":"2dd4d65476d1505ac595577a77e37ccd6902dc5b61d39daf8b0813fba6426e5c"} Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.615813 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.618585 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08fb57b1-f237-4913-8897-a21202273268","Type":"ContainerStarted","Data":"826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb"} Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.622179 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"079e3d7d-bd4f-4198-8606-95192a514c07","Type":"ContainerDied","Data":"fee032ffa8aa1dbfcab87d2f666d06dce9f00f11a46c1ed8dccaedd7a3ae0ea4"} Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.622252 4932 scope.go:117] "RemoveContainer" containerID="2b1e75d16cb30a9c6ffb3c5157c9587182f8a699106125481032c4efb8da098d" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.622412 
4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.626079 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30bd9d4f-e84f-4320-9057-80d3d53f7ebb","Type":"ContainerStarted","Data":"ef6f599a3d418b9b543c98edcdf9f0f0c968498f5968cc9b3a5e3260ba0ced73"} Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.626192 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30bd9d4f-e84f-4320-9057-80d3d53f7ebb","Type":"ContainerStarted","Data":"b586d7053bf31a7678ef91de08a0a0dd40541c6b68d24d82d104e9ca9533195b"} Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.626362 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api-log" containerID="cri-o://b586d7053bf31a7678ef91de08a0a0dd40541c6b68d24d82d104e9ca9533195b" gracePeriod=30 Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.626669 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.627034 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api" containerID="cri-o://ef6f599a3d418b9b543c98edcdf9f0f0c968498f5968cc9b3a5e3260ba0ced73" gracePeriod=30 Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.636667 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" podStartSLOduration=4.636647927 podStartE2EDuration="4.636647927s" podCreationTimestamp="2026-02-18 19:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 
19:54:26.636240897 +0000 UTC m=+1230.218195742" watchObservedRunningTime="2026-02-18 19:54:26.636647927 +0000 UTC m=+1230.218602772" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.688594 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.688575036 podStartE2EDuration="4.688575036s" podCreationTimestamp="2026-02-18 19:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:26.661489389 +0000 UTC m=+1230.243444224" watchObservedRunningTime="2026-02-18 19:54:26.688575036 +0000 UTC m=+1230.270529881" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.716254 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.722672 4932 scope.go:117] "RemoveContainer" containerID="41bbf0004efdd834aa04334d49096cca68b41c5d9f117836f2c8dc6fe6f5d5be" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.725411 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736188 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:26 crc kubenswrapper[4932]: E0218 19:54:26.736546 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a2f5f7-e711-48ad-9455-4c9591d751a4" containerName="init" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736565 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a2f5f7-e711-48ad-9455-4c9591d751a4" containerName="init" Feb 18 19:54:26 crc kubenswrapper[4932]: E0218 19:54:26.736584 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="ceilometer-notification-agent" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736592 4932 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="ceilometer-notification-agent" Feb 18 19:54:26 crc kubenswrapper[4932]: E0218 19:54:26.736605 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="proxy-httpd" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736611 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="proxy-httpd" Feb 18 19:54:26 crc kubenswrapper[4932]: E0218 19:54:26.736620 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" containerName="init" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736626 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" containerName="init" Feb 18 19:54:26 crc kubenswrapper[4932]: E0218 19:54:26.736644 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="sg-core" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736649 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="sg-core" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736836 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a2f5f7-e711-48ad-9455-4c9591d751a4" containerName="init" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736853 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="ceilometer-notification-agent" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736865 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" containerName="init" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736884 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="sg-core" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.736902 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" containerName="proxy-httpd" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.739223 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.741586 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.741865 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.773529 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75df984768-5mv9k" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.776952 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806229 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-run-httpd\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806303 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-log-httpd\") pod \"ceilometer-0\" (UID: 
\"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806324 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-config-data\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806351 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806412 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5dzv\" (UniqueName: \"kubernetes.io/projected/f81248d0-bf30-4447-ad78-7bfe9048bbea-kube-api-access-x5dzv\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806431 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-scripts\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.806464 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 
19:54:26.867035 4932 scope.go:117] "RemoveContainer" containerID="c7ee5732776c18a927c72c5ff1cc708a0c4c7cbb7be39c25d6f15f19eb006153" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.907758 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.907876 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-run-httpd\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.907952 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-log-httpd\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.907983 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-config-data\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.908019 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.908109 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x5dzv\" (UniqueName: \"kubernetes.io/projected/f81248d0-bf30-4447-ad78-7bfe9048bbea-kube-api-access-x5dzv\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.908140 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-scripts\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.909194 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-log-httpd\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.910135 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-run-httpd\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.913866 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.914015 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " 
pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.914741 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-scripts\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.916748 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-config-data\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:26 crc kubenswrapper[4932]: I0218 19:54:26.935590 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5dzv\" (UniqueName: \"kubernetes.io/projected/f81248d0-bf30-4447-ad78-7bfe9048bbea-kube-api-access-x5dzv\") pod \"ceilometer-0\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " pod="openstack/ceilometer-0" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.158670 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.204115 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="079e3d7d-bd4f-4198-8606-95192a514c07" path="/var/lib/kubelet/pods/079e3d7d-bd4f-4198-8606-95192a514c07/volumes" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.211389 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c9f2985-c3be-4e9c-a12a-1bae71d1bcea" path="/var/lib/kubelet/pods/2c9f2985-c3be-4e9c-a12a-1bae71d1bcea/volumes" Feb 18 19:54:27 crc kubenswrapper[4932]: E0218 19:54:27.461298 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:27 crc kubenswrapper[4932]: E0218 19:54:27.471096 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:27 crc kubenswrapper[4932]: E0218 19:54:27.481361 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 19:54:27 crc kubenswrapper[4932]: E0218 19:54:27.481456 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.677532 4932 generic.go:334] "Generic (PLEG): container finished" podID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerID="f5769d60f6e01bf4316e0a1d5902b22aaf988b784a78ee3cc62feeec1f37553a" exitCode=137 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.677568 4932 generic.go:334] "Generic (PLEG): container finished" podID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerID="c85e580ee020727173d28e445621bbce2289b58bcee15597e5fb5350c78183fd" exitCode=137 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.677580 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4cfbdb9c-hwmr5" event={"ID":"4938c577-60aa-45c3-9190-b6e82bcf8b0d","Type":"ContainerDied","Data":"f5769d60f6e01bf4316e0a1d5902b22aaf988b784a78ee3cc62feeec1f37553a"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.677654 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4cfbdb9c-hwmr5" event={"ID":"4938c577-60aa-45c3-9190-b6e82bcf8b0d","Type":"ContainerDied","Data":"c85e580ee020727173d28e445621bbce2289b58bcee15597e5fb5350c78183fd"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.680686 4932 generic.go:334] "Generic (PLEG): container finished" podID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerID="b586d7053bf31a7678ef91de08a0a0dd40541c6b68d24d82d104e9ca9533195b" exitCode=143 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.680740 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30bd9d4f-e84f-4320-9057-80d3d53f7ebb","Type":"ContainerDied","Data":"b586d7053bf31a7678ef91de08a0a0dd40541c6b68d24d82d104e9ca9533195b"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.683425 4932 generic.go:334] "Generic (PLEG): container finished" podID="a620c48b-58fa-487f-8997-e2784ddc497b" 
containerID="e80cdd4378af4ac5d4d707a290fa639025fc55be34fd9af1c68a0bd06a7b10c3" exitCode=137 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.683448 4932 generic.go:334] "Generic (PLEG): container finished" podID="a620c48b-58fa-487f-8997-e2784ddc497b" containerID="97ecd324c61720be922083172bc1852b964c2ee86274e593e6ab59deb4006699" exitCode=137 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.683484 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67874d8bd5-ff7xc" event={"ID":"a620c48b-58fa-487f-8997-e2784ddc497b","Type":"ContainerDied","Data":"e80cdd4378af4ac5d4d707a290fa639025fc55be34fd9af1c68a0bd06a7b10c3"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.683505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67874d8bd5-ff7xc" event={"ID":"a620c48b-58fa-487f-8997-e2784ddc497b","Type":"ContainerDied","Data":"97ecd324c61720be922083172bc1852b964c2ee86274e593e6ab59deb4006699"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.710688 4932 generic.go:334] "Generic (PLEG): container finished" podID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerID="a824fe0a64ae9746970f5bc8a389ffaa0e7b9eacf3d8dea3f2ebb12195def55c" exitCode=137 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.710731 4932 generic.go:334] "Generic (PLEG): container finished" podID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerID="80df06be1d2a603214b5aa7b38525d904a38b1052555a7f95c74bc71722c9961" exitCode=137 Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.710810 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-644d9bbcf7-chs9h" event={"ID":"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf","Type":"ContainerDied","Data":"a824fe0a64ae9746970f5bc8a389ffaa0e7b9eacf3d8dea3f2ebb12195def55c"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.710842 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-644d9bbcf7-chs9h" 
event={"ID":"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf","Type":"ContainerDied","Data":"80df06be1d2a603214b5aa7b38525d904a38b1052555a7f95c74bc71722c9961"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.713241 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08fb57b1-f237-4913-8897-a21202273268","Type":"ContainerStarted","Data":"f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7"} Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.739648 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.739680 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.740520 4932 scope.go:117] "RemoveContainer" containerID="a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942" Feb 18 19:54:27 crc kubenswrapper[4932]: E0218 19:54:27.740732 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0882c686-1b07-4ac7-a6be-148eff7faa19)\"" pod="openstack/watcher-decision-engine-0" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.765409 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.201338976 podStartE2EDuration="6.765383276s" podCreationTimestamp="2026-02-18 19:54:21 +0000 UTC" firstStartedPulling="2026-02-18 19:54:24.511287361 +0000 UTC m=+1228.093242196" lastFinishedPulling="2026-02-18 19:54:25.075331651 +0000 UTC m=+1228.657286496" observedRunningTime="2026-02-18 19:54:27.740334815 +0000 UTC m=+1231.322289670" 
watchObservedRunningTime="2026-02-18 19:54:27.765383276 +0000 UTC m=+1231.347338141" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.894421 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.960541 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-config-data\") pod \"a620c48b-58fa-487f-8997-e2784ddc497b\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.960755 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a620c48b-58fa-487f-8997-e2784ddc497b-horizon-secret-key\") pod \"a620c48b-58fa-487f-8997-e2784ddc497b\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.960811 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfb47\" (UniqueName: \"kubernetes.io/projected/a620c48b-58fa-487f-8997-e2784ddc497b-kube-api-access-lfb47\") pod \"a620c48b-58fa-487f-8997-e2784ddc497b\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.960832 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-scripts\") pod \"a620c48b-58fa-487f-8997-e2784ddc497b\" (UID: \"a620c48b-58fa-487f-8997-e2784ddc497b\") " Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.960870 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a620c48b-58fa-487f-8997-e2784ddc497b-logs\") pod \"a620c48b-58fa-487f-8997-e2784ddc497b\" (UID: 
\"a620c48b-58fa-487f-8997-e2784ddc497b\") " Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.981746 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a620c48b-58fa-487f-8997-e2784ddc497b-kube-api-access-lfb47" (OuterVolumeSpecName: "kube-api-access-lfb47") pod "a620c48b-58fa-487f-8997-e2784ddc497b" (UID: "a620c48b-58fa-487f-8997-e2784ddc497b"). InnerVolumeSpecName "kube-api-access-lfb47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.982364 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a620c48b-58fa-487f-8997-e2784ddc497b-logs" (OuterVolumeSpecName: "logs") pod "a620c48b-58fa-487f-8997-e2784ddc497b" (UID: "a620c48b-58fa-487f-8997-e2784ddc497b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:27 crc kubenswrapper[4932]: I0218 19:54:27.996985 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a620c48b-58fa-487f-8997-e2784ddc497b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a620c48b-58fa-487f-8997-e2784ddc497b" (UID: "a620c48b-58fa-487f-8997-e2784ddc497b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.004476 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.020979 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-config-data" (OuterVolumeSpecName: "config-data") pod "a620c48b-58fa-487f-8997-e2784ddc497b" (UID: "a620c48b-58fa-487f-8997-e2784ddc497b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.058664 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-scripts" (OuterVolumeSpecName: "scripts") pod "a620c48b-58fa-487f-8997-e2784ddc497b" (UID: "a620c48b-58fa-487f-8997-e2784ddc497b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.094119 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.094152 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a620c48b-58fa-487f-8997-e2784ddc497b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.094163 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfb47\" (UniqueName: \"kubernetes.io/projected/a620c48b-58fa-487f-8997-e2784ddc497b-kube-api-access-lfb47\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.094184 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a620c48b-58fa-487f-8997-e2784ddc497b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.094193 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a620c48b-58fa-487f-8997-e2784ddc497b-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.435240 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.464075 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604013 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-config-data\") pod \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604341 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4938c577-60aa-45c3-9190-b6e82bcf8b0d-logs\") pod \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604397 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-logs\") pod \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604480 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-horizon-secret-key\") pod \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604525 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-scripts\") pod \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " Feb 18 19:54:28 crc 
kubenswrapper[4932]: I0218 19:54:28.604568 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-scripts\") pod \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604598 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62ssm\" (UniqueName: \"kubernetes.io/projected/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-kube-api-access-62ssm\") pod \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604626 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4938c577-60aa-45c3-9190-b6e82bcf8b0d-logs" (OuterVolumeSpecName: "logs") pod "4938c577-60aa-45c3-9190-b6e82bcf8b0d" (UID: "4938c577-60aa-45c3-9190-b6e82bcf8b0d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604646 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rph76\" (UniqueName: \"kubernetes.io/projected/4938c577-60aa-45c3-9190-b6e82bcf8b0d-kube-api-access-rph76\") pod \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604679 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4938c577-60aa-45c3-9190-b6e82bcf8b0d-horizon-secret-key\") pod \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\" (UID: \"4938c577-60aa-45c3-9190-b6e82bcf8b0d\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604700 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-config-data\") pod \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\" (UID: \"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf\") " Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.604767 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-logs" (OuterVolumeSpecName: "logs") pod "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" (UID: "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.605137 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4938c577-60aa-45c3-9190-b6e82bcf8b0d-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.605158 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.612915 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-kube-api-access-62ssm" (OuterVolumeSpecName: "kube-api-access-62ssm") pod "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" (UID: "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf"). InnerVolumeSpecName "kube-api-access-62ssm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.623700 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4938c577-60aa-45c3-9190-b6e82bcf8b0d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4938c577-60aa-45c3-9190-b6e82bcf8b0d" (UID: "4938c577-60aa-45c3-9190-b6e82bcf8b0d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.623857 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4938c577-60aa-45c3-9190-b6e82bcf8b0d-kube-api-access-rph76" (OuterVolumeSpecName: "kube-api-access-rph76") pod "4938c577-60aa-45c3-9190-b6e82bcf8b0d" (UID: "4938c577-60aa-45c3-9190-b6e82bcf8b0d"). InnerVolumeSpecName "kube-api-access-rph76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.626185 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" (UID: "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.637722 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-scripts" (OuterVolumeSpecName: "scripts") pod "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" (UID: "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.659412 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-scripts" (OuterVolumeSpecName: "scripts") pod "4938c577-60aa-45c3-9190-b6e82bcf8b0d" (UID: "4938c577-60aa-45c3-9190-b6e82bcf8b0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.679822 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-config-data" (OuterVolumeSpecName: "config-data") pod "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" (UID: "a8b5aede-ac2c-4a2b-ba58-858c9046d8bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.685787 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-config-data" (OuterVolumeSpecName: "config-data") pod "4938c577-60aa-45c3-9190-b6e82bcf8b0d" (UID: "4938c577-60aa-45c3-9190-b6e82bcf8b0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721080 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rph76\" (UniqueName: \"kubernetes.io/projected/4938c577-60aa-45c3-9190-b6e82bcf8b0d-kube-api-access-rph76\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721135 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4938c577-60aa-45c3-9190-b6e82bcf8b0d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721146 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721155 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721166 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721252 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4938c577-60aa-45c3-9190-b6e82bcf8b0d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721261 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.721269 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62ssm\" (UniqueName: \"kubernetes.io/projected/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf-kube-api-access-62ssm\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725351 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-57c4489bcf-qchgn"] Feb 18 19:54:28 crc kubenswrapper[4932]: E0218 19:54:28.725707 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725723 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: E0218 19:54:28.725742 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725749 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: E0218 19:54:28.725765 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725771 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: E0218 
19:54:28.725788 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725794 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: E0218 19:54:28.725802 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725808 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: E0218 19:54:28.725817 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725823 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.725989 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.726006 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.726016 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.726032 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.726043 4932 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" containerName="horizon-log" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.726049 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" containerName="horizon" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.728996 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.733493 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.734110 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.761470 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-644d9bbcf7-chs9h" event={"ID":"a8b5aede-ac2c-4a2b-ba58-858c9046d8bf","Type":"ContainerDied","Data":"64958cb64aa641fc969187f742c63571ece0fcc99f90f916c984ba259dcd59e7"} Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.761520 4932 scope.go:117] "RemoveContainer" containerID="a824fe0a64ae9746970f5bc8a389ffaa0e7b9eacf3d8dea3f2ebb12195def55c" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.761646 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-644d9bbcf7-chs9h" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.778218 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57c4489bcf-qchgn"] Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.786774 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerStarted","Data":"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048"} Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.786905 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerStarted","Data":"bcdb72bd174613995404d4a92c415ade81bee3dfae5093758e2e4468047c8e5f"} Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.802505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b4cfbdb9c-hwmr5" event={"ID":"4938c577-60aa-45c3-9190-b6e82bcf8b0d","Type":"ContainerDied","Data":"449b65cc6eee0acc18bb77293bfac087ad9d12fb9f06318dfdbe198587c35eda"} Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.802589 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b4cfbdb9c-hwmr5" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.823914 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-config\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.824199 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-combined-ca-bundle\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.824332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-httpd-config\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.824491 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-internal-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.824636 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-ovndb-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: 
\"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.824747 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-public-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.825059 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjt2s\" (UniqueName: \"kubernetes.io/projected/91d8a414-576a-4c50-990e-3daa2724ecb1-kube-api-access-rjt2s\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.827306 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-644d9bbcf7-chs9h"] Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.829334 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67874d8bd5-ff7xc" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.829474 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67874d8bd5-ff7xc" event={"ID":"a620c48b-58fa-487f-8997-e2784ddc497b","Type":"ContainerDied","Data":"3db1ad470af452257972c4a5c8d1fb2ee8875e24f72fe068e89046c3a5a557ce"} Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.862102 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-644d9bbcf7-chs9h"] Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.884147 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b4cfbdb9c-hwmr5"] Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.900750 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b4cfbdb9c-hwmr5"] Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.913233 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67874d8bd5-ff7xc"] Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.926412 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-67874d8bd5-ff7xc"] Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927000 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-ovndb-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927038 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-public-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927060 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjt2s\" (UniqueName: \"kubernetes.io/projected/91d8a414-576a-4c50-990e-3daa2724ecb1-kube-api-access-rjt2s\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927229 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-config\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927259 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-combined-ca-bundle\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927279 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-httpd-config\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.927323 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-internal-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.941914 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-internal-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.942544 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-combined-ca-bundle\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.943160 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-config\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.947059 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-ovndb-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.950909 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-httpd-config\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.957694 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjt2s\" (UniqueName: \"kubernetes.io/projected/91d8a414-576a-4c50-990e-3daa2724ecb1-kube-api-access-rjt2s\") pod \"neutron-57c4489bcf-qchgn\" (UID: 
\"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:28 crc kubenswrapper[4932]: I0218 19:54:28.965761 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91d8a414-576a-4c50-990e-3daa2724ecb1-public-tls-certs\") pod \"neutron-57c4489bcf-qchgn\" (UID: \"91d8a414-576a-4c50-990e-3daa2724ecb1\") " pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.005205 4932 scope.go:117] "RemoveContainer" containerID="80df06be1d2a603214b5aa7b38525d904a38b1052555a7f95c74bc71722c9961" Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.038835 4932 scope.go:117] "RemoveContainer" containerID="f5769d60f6e01bf4316e0a1d5902b22aaf988b784a78ee3cc62feeec1f37553a" Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.077623 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.189973 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4938c577-60aa-45c3-9190-b6e82bcf8b0d" path="/var/lib/kubelet/pods/4938c577-60aa-45c3-9190-b6e82bcf8b0d/volumes" Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.190659 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a620c48b-58fa-487f-8997-e2784ddc497b" path="/var/lib/kubelet/pods/a620c48b-58fa-487f-8997-e2784ddc497b/volumes" Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.191780 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b5aede-ac2c-4a2b-ba58-858c9046d8bf" path="/var/lib/kubelet/pods/a8b5aede-ac2c-4a2b-ba58-858c9046d8bf/volumes" Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.225406 4932 scope.go:117] "RemoveContainer" containerID="c85e580ee020727173d28e445621bbce2289b58bcee15597e5fb5350c78183fd" Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.332911 4932 
scope.go:117] "RemoveContainer" containerID="e80cdd4378af4ac5d4d707a290fa639025fc55be34fd9af1c68a0bd06a7b10c3" Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.671320 4932 scope.go:117] "RemoveContainer" containerID="97ecd324c61720be922083172bc1852b964c2ee86274e593e6ab59deb4006699" Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.871002 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerStarted","Data":"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41"} Feb 18 19:54:29 crc kubenswrapper[4932]: I0218 19:54:29.907266 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57c4489bcf-qchgn"] Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.022707 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8557cf8c94-8d7qp"] Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.025154 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.049927 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.051031 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.069353 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8557cf8c94-8d7qp"] Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.112687 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.160759 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.193704 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-public-tls-certs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.193811 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-config-data\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.193917 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-config-data-custom\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.193948 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-combined-ca-bundle\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.193967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e104e849-d054-4208-8b93-823e82c2627f-logs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.194009 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz6vw\" (UniqueName: \"kubernetes.io/projected/e104e849-d054-4208-8b93-823e82c2627f-kube-api-access-cz6vw\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.194031 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-internal-tls-certs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296120 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-config-data-custom\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296185 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-combined-ca-bundle\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296206 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e104e849-d054-4208-8b93-823e82c2627f-logs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296243 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz6vw\" (UniqueName: \"kubernetes.io/projected/e104e849-d054-4208-8b93-823e82c2627f-kube-api-access-cz6vw\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296265 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-internal-tls-certs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296359 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-public-tls-certs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296390 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-config-data\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.296652 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e104e849-d054-4208-8b93-823e82c2627f-logs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.303919 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-internal-tls-certs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.305006 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-public-tls-certs\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.305068 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-config-data-custom\") 
pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.313693 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-combined-ca-bundle\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.314656 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e104e849-d054-4208-8b93-823e82c2627f-config-data\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.330835 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz6vw\" (UniqueName: \"kubernetes.io/projected/e104e849-d054-4208-8b93-823e82c2627f-kube-api-access-cz6vw\") pod \"barbican-api-8557cf8c94-8d7qp\" (UID: \"e104e849-d054-4208-8b93-823e82c2627f\") " pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.432072 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.726791 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.895127 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerStarted","Data":"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09"} Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.899399 4932 generic.go:334] "Generic (PLEG): container finished" podID="5bd90883-79db-4903-87ab-828b9608f9fa" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" exitCode=137 Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.899497 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.900234 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5bd90883-79db-4903-87ab-828b9608f9fa","Type":"ContainerDied","Data":"fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746"} Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.900255 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5bd90883-79db-4903-87ab-828b9608f9fa","Type":"ContainerDied","Data":"df7e1feb306b3e43a9f10b16516d4c855aa78c2e70283552aa8d3546e3dee111"} Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.900272 4932 scope.go:117] "RemoveContainer" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.904769 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57c4489bcf-qchgn" event={"ID":"91d8a414-576a-4c50-990e-3daa2724ecb1","Type":"ContainerStarted","Data":"d2b6d0488b17d213b7573339849794ce994907ae0e84f1dc6b70e23a24945529"} Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.904807 4932 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/neutron-57c4489bcf-qchgn" event={"ID":"91d8a414-576a-4c50-990e-3daa2724ecb1","Type":"ContainerStarted","Data":"2f36790b4e38abbd8d80704e32d6588bbe88a6a1a64652a37c07a7528178cd51"} Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.914669 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd90883-79db-4903-87ab-828b9608f9fa-logs\") pod \"5bd90883-79db-4903-87ab-828b9608f9fa\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.915486 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-config-data\") pod \"5bd90883-79db-4903-87ab-828b9608f9fa\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.915583 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-combined-ca-bundle\") pod \"5bd90883-79db-4903-87ab-828b9608f9fa\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.915717 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkk7w\" (UniqueName: \"kubernetes.io/projected/5bd90883-79db-4903-87ab-828b9608f9fa-kube-api-access-jkk7w\") pod \"5bd90883-79db-4903-87ab-828b9608f9fa\" (UID: \"5bd90883-79db-4903-87ab-828b9608f9fa\") " Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.915921 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bd90883-79db-4903-87ab-828b9608f9fa-logs" (OuterVolumeSpecName: "logs") pod "5bd90883-79db-4903-87ab-828b9608f9fa" (UID: "5bd90883-79db-4903-87ab-828b9608f9fa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.917810 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.918337 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bd90883-79db-4903-87ab-828b9608f9fa-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.923877 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd90883-79db-4903-87ab-828b9608f9fa-kube-api-access-jkk7w" (OuterVolumeSpecName: "kube-api-access-jkk7w") pod "5bd90883-79db-4903-87ab-828b9608f9fa" (UID: "5bd90883-79db-4903-87ab-828b9608f9fa"). InnerVolumeSpecName "kube-api-access-jkk7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.951651 4932 scope.go:117] "RemoveContainer" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" Feb 18 19:54:30 crc kubenswrapper[4932]: E0218 19:54:30.954901 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746\": container with ID starting with fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746 not found: ID does not exist" containerID="fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.955107 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746"} err="failed to get container status \"fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746\": rpc error: code = NotFound desc = could not find container 
\"fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746\": container with ID starting with fa9a3ac9780702ac89166d02acfbe233f83eb16d4b459149620467f0de423746 not found: ID does not exist" Feb 18 19:54:30 crc kubenswrapper[4932]: I0218 19:54:30.981323 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bd90883-79db-4903-87ab-828b9608f9fa" (UID: "5bd90883-79db-4903-87ab-828b9608f9fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.001487 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-config-data" (OuterVolumeSpecName: "config-data") pod "5bd90883-79db-4903-87ab-828b9608f9fa" (UID: "5bd90883-79db-4903-87ab-828b9608f9fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.022342 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.022381 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd90883-79db-4903-87ab-828b9608f9fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.022393 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkk7w\" (UniqueName: \"kubernetes.io/projected/5bd90883-79db-4903-87ab-828b9608f9fa-kube-api-access-jkk7w\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.222144 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8557cf8c94-8d7qp"] Feb 18 19:54:31 crc kubenswrapper[4932]: W0218 19:54:31.237745 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode104e849_d054_4208_8b93_823e82c2627f.slice/crio-9aceefcd1fa71ec3be26bb488250662ae843e663d9ec3de27ded1b3b424ffcc9 WatchSource:0}: Error finding container 9aceefcd1fa71ec3be26bb488250662ae843e663d9ec3de27ded1b3b424ffcc9: Status 404 returned error can't find the container with id 9aceefcd1fa71ec3be26bb488250662ae843e663d9ec3de27ded1b3b424ffcc9 Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.933822 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57c4489bcf-qchgn" event={"ID":"91d8a414-576a-4c50-990e-3daa2724ecb1","Type":"ContainerStarted","Data":"16c1d684fe935ad4f6a91d59e272a507b07fb7acf4d9f7cbf831c127b7702151"} Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.934194 4932 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.941021 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8557cf8c94-8d7qp" event={"ID":"e104e849-d054-4208-8b93-823e82c2627f","Type":"ContainerStarted","Data":"9cd2acd89dcd9fd17bf09fce3fbd1b66e182a4bc1dde9c06a01f7196afec0550"} Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.941069 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8557cf8c94-8d7qp" event={"ID":"e104e849-d054-4208-8b93-823e82c2627f","Type":"ContainerStarted","Data":"59f2d3398355a74dd1ca4ccdfb932421d1f49c78319ed7d65537d34c0a82a39e"} Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.941082 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8557cf8c94-8d7qp" event={"ID":"e104e849-d054-4208-8b93-823e82c2627f","Type":"ContainerStarted","Data":"9aceefcd1fa71ec3be26bb488250662ae843e663d9ec3de27ded1b3b424ffcc9"} Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.941098 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.941124 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 19:54:31.990332 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8557cf8c94-8d7qp" podStartSLOduration=2.990308771 podStartE2EDuration="2.990308771s" podCreationTimestamp="2026-02-18 19:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:31.988717902 +0000 UTC m=+1235.570672767" watchObservedRunningTime="2026-02-18 19:54:31.990308771 +0000 UTC m=+1235.572263616" Feb 18 19:54:31 crc kubenswrapper[4932]: I0218 
19:54:31.991408 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-57c4489bcf-qchgn" podStartSLOduration=3.991399818 podStartE2EDuration="3.991399818s" podCreationTimestamp="2026-02-18 19:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:31.966589463 +0000 UTC m=+1235.548544308" watchObservedRunningTime="2026-02-18 19:54:31.991399818 +0000 UTC m=+1235.573354663"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.169148 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7449c5884b-q9l4k"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.433709 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7449c5884b-q9l4k"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.512907 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.707518 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.785363 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-855cb46c75-kwghr"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.844589 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b855db8f7-mh8jh"]
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.844839 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="dnsmasq-dns" containerID="cri-o://2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d" gracePeriod=10
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.968284 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerStarted","Data":"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c"}
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.970525 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 19:54:32 crc kubenswrapper[4932]: I0218 19:54:32.979065 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.159:5353: connect: connection refused"
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.004744 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.05723253 podStartE2EDuration="7.004723726s" podCreationTimestamp="2026-02-18 19:54:26 +0000 UTC" firstStartedPulling="2026-02-18 19:54:28.081835163 +0000 UTC m=+1231.663790008" lastFinishedPulling="2026-02-18 19:54:32.029326359 +0000 UTC m=+1235.611281204" observedRunningTime="2026-02-18 19:54:32.992980965 +0000 UTC m=+1236.574935820" watchObservedRunningTime="2026-02-18 19:54:33.004723726 +0000 UTC m=+1236.586678571"
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.086925 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.450121 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh"
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.585086 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d668s\" (UniqueName: \"kubernetes.io/projected/0affb7f8-ebd4-4d8d-b41c-dd968316038d-kube-api-access-d668s\") pod \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") "
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.585298 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-svc\") pod \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") "
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.585345 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-sb\") pod \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") "
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.585371 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-swift-storage-0\") pod \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") "
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.585438 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-config\") pod \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") "
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.585482 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-nb\") pod \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\" (UID: \"0affb7f8-ebd4-4d8d-b41c-dd968316038d\") "
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.592623 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0affb7f8-ebd4-4d8d-b41c-dd968316038d-kube-api-access-d668s" (OuterVolumeSpecName: "kube-api-access-d668s") pod "0affb7f8-ebd4-4d8d-b41c-dd968316038d" (UID: "0affb7f8-ebd4-4d8d-b41c-dd968316038d"). InnerVolumeSpecName "kube-api-access-d668s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.650345 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0affb7f8-ebd4-4d8d-b41c-dd968316038d" (UID: "0affb7f8-ebd4-4d8d-b41c-dd968316038d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.653704 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0affb7f8-ebd4-4d8d-b41c-dd968316038d" (UID: "0affb7f8-ebd4-4d8d-b41c-dd968316038d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.655502 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0affb7f8-ebd4-4d8d-b41c-dd968316038d" (UID: "0affb7f8-ebd4-4d8d-b41c-dd968316038d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.668513 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0affb7f8-ebd4-4d8d-b41c-dd968316038d" (UID: "0affb7f8-ebd4-4d8d-b41c-dd968316038d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.689552 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.689587 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.689599 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.689609 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.689619 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d668s\" (UniqueName: \"kubernetes.io/projected/0affb7f8-ebd4-4d8d-b41c-dd968316038d-kube-api-access-d668s\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.703799 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-config" (OuterVolumeSpecName: "config") pod "0affb7f8-ebd4-4d8d-b41c-dd968316038d" (UID: "0affb7f8-ebd4-4d8d-b41c-dd968316038d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.791640 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0affb7f8-ebd4-4d8d-b41c-dd968316038d-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.979880 4932 generic.go:334] "Generic (PLEG): container finished" podID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerID="2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d" exitCode=0
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.979941 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh"
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.979985 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" event={"ID":"0affb7f8-ebd4-4d8d-b41c-dd968316038d","Type":"ContainerDied","Data":"2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d"}
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.980049 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b855db8f7-mh8jh" event={"ID":"0affb7f8-ebd4-4d8d-b41c-dd968316038d","Type":"ContainerDied","Data":"58e783f05bfc925c4081556f019c7c54bdb33f3d7590e9cb651eb5ff2a823274"}
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.980079 4932 scope.go:117] "RemoveContainer" containerID="2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d"
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.980845 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="cinder-scheduler" containerID="cri-o://826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb" gracePeriod=30
Feb 18 19:54:33 crc kubenswrapper[4932]: I0218 19:54:33.981103 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="probe" containerID="cri-o://f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7" gracePeriod=30
Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.029621 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b855db8f7-mh8jh"]
Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.039043 4932 scope.go:117] "RemoveContainer" containerID="6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4"
Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.042154 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b855db8f7-mh8jh"]
Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.064495 4932 scope.go:117] "RemoveContainer" containerID="2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d"
Feb 18 19:54:34 crc kubenswrapper[4932]: E0218 19:54:34.065130 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d\": container with ID starting with 2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d not found: ID does not exist" containerID="2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d"
Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.065185 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d"} err="failed to get container status \"2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d\": rpc error: code = NotFound desc = could not find container \"2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d\": container with ID starting with 2d361f5ee2e639590515e4ecc00afe9e1165bed305ccaa12c6c66870e4ddb38d not found: ID does not exist"
Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.065211 4932 scope.go:117] "RemoveContainer" containerID="6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4"
Feb 18 19:54:34 crc kubenswrapper[4932]: E0218 19:54:34.065505 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4\": container with ID starting with 6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4 not found: ID does not exist" containerID="6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4"
Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.065559 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4"} err="failed to get container status \"6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4\": rpc error: code = NotFound desc = could not find container \"6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4\": container with ID starting with 6643cdfc456b2b994b88e3fc8de96cb24f5e66ea517d0846fe9ab3b3661927d4 not found: ID does not exist"
Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.993732 4932 generic.go:334] "Generic (PLEG): container finished" podID="08fb57b1-f237-4913-8897-a21202273268" containerID="f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7" exitCode=0
Feb 18 19:54:34 crc kubenswrapper[4932]: I0218 19:54:34.994885 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08fb57b1-f237-4913-8897-a21202273268","Type":"ContainerDied","Data":"f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7"}
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.189890 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" path="/var/lib/kubelet/pods/0affb7f8-ebd4-4d8d-b41c-dd968316038d/volumes"
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.640936 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.734568 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-combined-ca-bundle\") pod \"08fb57b1-f237-4913-8897-a21202273268\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") "
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.734638 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfcnj\" (UniqueName: \"kubernetes.io/projected/08fb57b1-f237-4913-8897-a21202273268-kube-api-access-lfcnj\") pod \"08fb57b1-f237-4913-8897-a21202273268\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") "
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.734822 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data\") pod \"08fb57b1-f237-4913-8897-a21202273268\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") "
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.734853 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08fb57b1-f237-4913-8897-a21202273268-etc-machine-id\") pod \"08fb57b1-f237-4913-8897-a21202273268\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") "
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.734913 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-scripts\") pod \"08fb57b1-f237-4913-8897-a21202273268\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") "
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.734946 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data-custom\") pod \"08fb57b1-f237-4913-8897-a21202273268\" (UID: \"08fb57b1-f237-4913-8897-a21202273268\") "
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.735682 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08fb57b1-f237-4913-8897-a21202273268-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "08fb57b1-f237-4913-8897-a21202273268" (UID: "08fb57b1-f237-4913-8897-a21202273268"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.737039 4932 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/08fb57b1-f237-4913-8897-a21202273268-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.742014 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08fb57b1-f237-4913-8897-a21202273268-kube-api-access-lfcnj" (OuterVolumeSpecName: "kube-api-access-lfcnj") pod "08fb57b1-f237-4913-8897-a21202273268" (UID: "08fb57b1-f237-4913-8897-a21202273268"). InnerVolumeSpecName "kube-api-access-lfcnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.744272 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-scripts" (OuterVolumeSpecName: "scripts") pod "08fb57b1-f237-4913-8897-a21202273268" (UID: "08fb57b1-f237-4913-8897-a21202273268"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.744818 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "08fb57b1-f237-4913-8897-a21202273268" (UID: "08fb57b1-f237-4913-8897-a21202273268"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.795236 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08fb57b1-f237-4913-8897-a21202273268" (UID: "08fb57b1-f237-4913-8897-a21202273268"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.831013 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data" (OuterVolumeSpecName: "config-data") pod "08fb57b1-f237-4913-8897-a21202273268" (UID: "08fb57b1-f237-4913-8897-a21202273268"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.839598 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.839637 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.839649 4932 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.839669 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08fb57b1-f237-4913-8897-a21202273268-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.839681 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfcnj\" (UniqueName: \"kubernetes.io/projected/08fb57b1-f237-4913-8897-a21202273268-kube-api-access-lfcnj\") on node \"crc\" DevicePath \"\""
Feb 18 19:54:35 crc kubenswrapper[4932]: I0218 19:54:35.900372 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.022639 4932 generic.go:334] "Generic (PLEG): container finished" podID="08fb57b1-f237-4913-8897-a21202273268" containerID="826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb" exitCode=0
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.022696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08fb57b1-f237-4913-8897-a21202273268","Type":"ContainerDied","Data":"826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb"}
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.022702 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.022728 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"08fb57b1-f237-4913-8897-a21202273268","Type":"ContainerDied","Data":"7e142471735b8d8ede9ef15b6d6b2ffab0ed91de871dc5c56cbedc4c3564c6af"}
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.022746 4932 scope.go:117] "RemoveContainer" containerID="f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.135139 4932 scope.go:117] "RemoveContainer" containerID="826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.139270 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.153303 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.192359 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.192861 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="cinder-scheduler"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.192881 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="cinder-scheduler"
Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.192892 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="init"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.192918 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="init"
Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.192935 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="probe"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.192941 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="probe"
Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.192953 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.192958 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier"
Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.192967 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="dnsmasq-dns"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.192973 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="dnsmasq-dns"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.193149 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0affb7f8-ebd4-4d8d-b41c-dd968316038d" containerName="dnsmasq-dns"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.193184 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" containerName="watcher-applier"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.193194 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="probe"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.193207 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fb57b1-f237-4913-8897-a21202273268" containerName="cinder-scheduler"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.194198 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.196331 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.207102 4932 scope.go:117] "RemoveContainer" containerID="f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7"
Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.207861 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7\": container with ID starting with f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7 not found: ID does not exist" containerID="f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.207960 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7"} err="failed to get container status \"f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7\": rpc error: code = NotFound desc = could not find container \"f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7\": container with ID starting with f4f12776aa0b9f3ee7ffe0d125a1f8a071de01b16c40b8ca0211c7c0d0a4a3e7 not found: ID does not exist"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.208040 4932 scope.go:117] "RemoveContainer" containerID="826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb"
Feb 18 19:54:36 crc kubenswrapper[4932]: E0218 19:54:36.208656 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb\": container with ID starting with 826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb not found: ID does not exist" containerID="826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.208685 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb"} err="failed to get container status \"826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb\": rpc error: code = NotFound desc = could not find container \"826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb\": container with ID starting with 826d7bde22e7ab6a3d0df6df6da88c633402dd91ebe5f63b969da715cc6463fb not found: ID does not exist"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.214401 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.367084 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.367420 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35c97753-d0c4-44cf-abe0-f529c2899b7d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.367520 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdxh7\" (UniqueName: \"kubernetes.io/projected/35c97753-d0c4-44cf-abe0-f529c2899b7d-kube-api-access-gdxh7\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.367696 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.367800 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-scripts\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.367969 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-config-data\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.470446 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.470852 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-scripts\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.471072 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-config-data\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.471191 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.471255 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35c97753-d0c4-44cf-abe0-f529c2899b7d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.471333 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdxh7\" (UniqueName: \"kubernetes.io/projected/35c97753-d0c4-44cf-abe0-f529c2899b7d-kube-api-access-gdxh7\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.471379 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/35c97753-d0c4-44cf-abe0-f529c2899b7d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.476907 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.477019 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.477047 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-scripts\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.479984 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c97753-d0c4-44cf-abe0-f529c2899b7d-config-data\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.495896 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdxh7\" (UniqueName: \"kubernetes.io/projected/35c97753-d0c4-44cf-abe0-f529c2899b7d-kube-api-access-gdxh7\") pod \"cinder-scheduler-0\" (UID: \"35c97753-d0c4-44cf-abe0-f529c2899b7d\") " pod="openstack/cinder-scheduler-0" Feb
18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.545723 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:54:36 crc kubenswrapper[4932]: I0218 19:54:36.759213 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75df984768-5mv9k" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 18 19:54:37 crc kubenswrapper[4932]: I0218 19:54:37.104809 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:54:37 crc kubenswrapper[4932]: I0218 19:54:37.207290 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08fb57b1-f237-4913-8897-a21202273268" path="/var/lib/kubelet/pods/08fb57b1-f237-4913-8897-a21202273268/volumes" Feb 18 19:54:37 crc kubenswrapper[4932]: I0218 19:54:37.740095 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:37 crc kubenswrapper[4932]: I0218 19:54:37.741703 4932 scope.go:117] "RemoveContainer" containerID="a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942" Feb 18 19:54:37 crc kubenswrapper[4932]: I0218 19:54:37.742265 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:38 crc kubenswrapper[4932]: I0218 19:54:38.045592 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerStarted","Data":"ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0"} Feb 18 19:54:38 crc kubenswrapper[4932]: I0218 19:54:38.047321 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"35c97753-d0c4-44cf-abe0-f529c2899b7d","Type":"ContainerStarted","Data":"5ce47eaf86d8b72ef803255871f90147374de82bc9885aece126a16a0fc4ba11"} Feb 18 19:54:38 crc kubenswrapper[4932]: I0218 19:54:38.047348 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"35c97753-d0c4-44cf-abe0-f529c2899b7d","Type":"ContainerStarted","Data":"7066cc7e29345081d0b6a878585df3450e6342538a5bebfb4b825a53f1fd11b0"} Feb 18 19:54:39 crc kubenswrapper[4932]: I0218 19:54:39.058867 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"35c97753-d0c4-44cf-abe0-f529c2899b7d","Type":"ContainerStarted","Data":"1e39bf1869e3a887cfef23f8406dff251742f8aa0fce2d1942783af5ab5ea984"} Feb 18 19:54:39 crc kubenswrapper[4932]: I0218 19:54:39.080191 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.080158497 podStartE2EDuration="3.080158497s" podCreationTimestamp="2026-02-18 19:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:39.075167523 +0000 UTC m=+1242.657122378" watchObservedRunningTime="2026-02-18 19:54:39.080158497 +0000 UTC m=+1242.662113342" Feb 18 19:54:40 crc kubenswrapper[4932]: I0218 19:54:40.087635 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:40 crc kubenswrapper[4932]: I0218 19:54:40.092088 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.084296 4932 generic.go:334] "Generic (PLEG): container finished" podID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" exitCode=1 Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.084369 
4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerDied","Data":"ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0"} Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.084579 4932 scope.go:117] "RemoveContainer" containerID="a6a123d69d1a4e46268f089397aa0f2920ef5932f2828721d8716a7b45e1e942" Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.085547 4932 scope.go:117] "RemoveContainer" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" Feb 18 19:54:41 crc kubenswrapper[4932]: E0218 19:54:41.085975 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0882c686-1b07-4ac7-a6be-148eff7faa19)\"" pod="openstack/watcher-decision-engine-0" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.120881 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5dc9dbf7f4-c6vxb" Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.546427 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.804402 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:41 crc kubenswrapper[4932]: I0218 19:54:41.924800 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8557cf8c94-8d7qp" Feb 18 19:54:42 crc kubenswrapper[4932]: I0218 19:54:42.010953 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7449c5884b-q9l4k"] Feb 18 19:54:42 crc kubenswrapper[4932]: I0218 19:54:42.016338 4932 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7449c5884b-q9l4k" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api-log" containerID="cri-o://98b2099bab8a6e146a0799442e00594a5328d749f9417d06bb347ef9fb18f009" gracePeriod=30 Feb 18 19:54:42 crc kubenswrapper[4932]: I0218 19:54:42.016512 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7449c5884b-q9l4k" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api" containerID="cri-o://7df07bd853489d447f256acaae8700635b716bd7ed59696363bdaa6d7cf3ee38" gracePeriod=30 Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.147334 4932 generic.go:334] "Generic (PLEG): container finished" podID="505f490e-dca8-49ae-aeeb-3392c065d841" containerID="7df07bd853489d447f256acaae8700635b716bd7ed59696363bdaa6d7cf3ee38" exitCode=0 Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.147645 4932 generic.go:334] "Generic (PLEG): container finished" podID="505f490e-dca8-49ae-aeeb-3392c065d841" containerID="98b2099bab8a6e146a0799442e00594a5328d749f9417d06bb347ef9fb18f009" exitCode=143 Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.147664 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7449c5884b-q9l4k" event={"ID":"505f490e-dca8-49ae-aeeb-3392c065d841","Type":"ContainerDied","Data":"7df07bd853489d447f256acaae8700635b716bd7ed59696363bdaa6d7cf3ee38"} Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.147688 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7449c5884b-q9l4k" event={"ID":"505f490e-dca8-49ae-aeeb-3392c065d841","Type":"ContainerDied","Data":"98b2099bab8a6e146a0799442e00594a5328d749f9417d06bb347ef9fb18f009"} Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.341258 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.412588 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505f490e-dca8-49ae-aeeb-3392c065d841-logs\") pod \"505f490e-dca8-49ae-aeeb-3392c065d841\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.412656 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data-custom\") pod \"505f490e-dca8-49ae-aeeb-3392c065d841\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.412687 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-combined-ca-bundle\") pod \"505f490e-dca8-49ae-aeeb-3392c065d841\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.412708 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/505f490e-dca8-49ae-aeeb-3392c065d841-kube-api-access-b9s68\") pod \"505f490e-dca8-49ae-aeeb-3392c065d841\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.412910 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data\") pod \"505f490e-dca8-49ae-aeeb-3392c065d841\" (UID: \"505f490e-dca8-49ae-aeeb-3392c065d841\") " Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.414284 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/505f490e-dca8-49ae-aeeb-3392c065d841-logs" (OuterVolumeSpecName: "logs") pod "505f490e-dca8-49ae-aeeb-3392c065d841" (UID: "505f490e-dca8-49ae-aeeb-3392c065d841"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.419358 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "505f490e-dca8-49ae-aeeb-3392c065d841" (UID: "505f490e-dca8-49ae-aeeb-3392c065d841"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.422355 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505f490e-dca8-49ae-aeeb-3392c065d841-kube-api-access-b9s68" (OuterVolumeSpecName: "kube-api-access-b9s68") pod "505f490e-dca8-49ae-aeeb-3392c065d841" (UID: "505f490e-dca8-49ae-aeeb-3392c065d841"). InnerVolumeSpecName "kube-api-access-b9s68". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.465821 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "505f490e-dca8-49ae-aeeb-3392c065d841" (UID: "505f490e-dca8-49ae-aeeb-3392c065d841"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.501987 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data" (OuterVolumeSpecName: "config-data") pod "505f490e-dca8-49ae-aeeb-3392c065d841" (UID: "505f490e-dca8-49ae-aeeb-3392c065d841"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.516607 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/505f490e-dca8-49ae-aeeb-3392c065d841-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.516649 4932 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.516664 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.516679 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9s68\" (UniqueName: \"kubernetes.io/projected/505f490e-dca8-49ae-aeeb-3392c065d841-kube-api-access-b9s68\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.516691 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/505f490e-dca8-49ae-aeeb-3392c065d841-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.912317 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.945203 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85d5f6489d-gxmwz" Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.997302 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-dc76b87d8-4l7z8"] Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 
19:54:43.997754 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-dc76b87d8-4l7z8" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-log" containerID="cri-o://e0a8661d91abe6650a2644a6bbb68f5a9be137c080d73c71bfeeedb79d7a94d1" gracePeriod=30 Feb 18 19:54:43 crc kubenswrapper[4932]: I0218 19:54:43.997805 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-dc76b87d8-4l7z8" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-api" containerID="cri-o://39753a946eb8a2d631a153f1f2e754fb36ce9fa30fb838383236eaf4f306d8fd" gracePeriod=30 Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.156833 4932 generic.go:334] "Generic (PLEG): container finished" podID="86cc3d08-5639-4155-bee3-b1f461184a24" containerID="e0a8661d91abe6650a2644a6bbb68f5a9be137c080d73c71bfeeedb79d7a94d1" exitCode=143 Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.156890 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc76b87d8-4l7z8" event={"ID":"86cc3d08-5639-4155-bee3-b1f461184a24","Type":"ContainerDied","Data":"e0a8661d91abe6650a2644a6bbb68f5a9be137c080d73c71bfeeedb79d7a94d1"} Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.166339 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7449c5884b-q9l4k" event={"ID":"505f490e-dca8-49ae-aeeb-3392c065d841","Type":"ContainerDied","Data":"c7d003d8c5cc0d3edc83d2a07bde218aaf6fe754f628f14115b8310796a97a1b"} Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.166371 4932 scope.go:117] "RemoveContainer" containerID="7df07bd853489d447f256acaae8700635b716bd7ed59696363bdaa6d7cf3ee38" Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.166404 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7449c5884b-q9l4k" Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.222238 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7449c5884b-q9l4k"] Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.223197 4932 scope.go:117] "RemoveContainer" containerID="98b2099bab8a6e146a0799442e00594a5328d749f9417d06bb347ef9fb18f009" Feb 18 19:54:44 crc kubenswrapper[4932]: I0218 19:54:44.229150 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7449c5884b-q9l4k"] Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.190001 4932 generic.go:334] "Generic (PLEG): container finished" podID="86cc3d08-5639-4155-bee3-b1f461184a24" containerID="39753a946eb8a2d631a153f1f2e754fb36ce9fa30fb838383236eaf4f306d8fd" exitCode=0 Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.214827 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" path="/var/lib/kubelet/pods/505f490e-dca8-49ae-aeeb-3392c065d841/volumes" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.215356 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc76b87d8-4l7z8" event={"ID":"86cc3d08-5639-4155-bee3-b1f461184a24","Type":"ContainerDied","Data":"39753a946eb8a2d631a153f1f2e754fb36ce9fa30fb838383236eaf4f306d8fd"} Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.301856 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.362825 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-combined-ca-bundle\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.362892 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq2zn\" (UniqueName: \"kubernetes.io/projected/86cc3d08-5639-4155-bee3-b1f461184a24-kube-api-access-hq2zn\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.362946 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-config-data\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.363017 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-internal-tls-certs\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.363043 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-scripts\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.363153 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-public-tls-certs\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.363263 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cc3d08-5639-4155-bee3-b1f461184a24-logs\") pod \"86cc3d08-5639-4155-bee3-b1f461184a24\" (UID: \"86cc3d08-5639-4155-bee3-b1f461184a24\") " Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.364154 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86cc3d08-5639-4155-bee3-b1f461184a24-logs" (OuterVolumeSpecName: "logs") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.371336 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-scripts" (OuterVolumeSpecName: "scripts") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.381670 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86cc3d08-5639-4155-bee3-b1f461184a24-kube-api-access-hq2zn" (OuterVolumeSpecName: "kube-api-access-hq2zn") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "kube-api-access-hq2zn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.437778 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.466071 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cc3d08-5639-4155-bee3-b1f461184a24-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.466104 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.466113 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq2zn\" (UniqueName: \"kubernetes.io/projected/86cc3d08-5639-4155-bee3-b1f461184a24-kube-api-access-hq2zn\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.466122 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.484596 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.490893 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-config-data" (OuterVolumeSpecName: "config-data") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.500978 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "86cc3d08-5639-4155-bee3-b1f461184a24" (UID: "86cc3d08-5639-4155-bee3-b1f461184a24"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.567682 4932 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.567712 4932 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:45 crc kubenswrapper[4932]: I0218 19:54:45.567724 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cc3d08-5639-4155-bee3-b1f461184a24-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.202687 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc76b87d8-4l7z8" 
event={"ID":"86cc3d08-5639-4155-bee3-b1f461184a24","Type":"ContainerDied","Data":"f46833318ddc8961d6f04764c058cb88d8c7c195fabe7b752747972666313452"} Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.202754 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc76b87d8-4l7z8" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.202760 4932 scope.go:117] "RemoveContainer" containerID="39753a946eb8a2d631a153f1f2e754fb36ce9fa30fb838383236eaf4f306d8fd" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.237452 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-dc76b87d8-4l7z8"] Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.239348 4932 scope.go:117] "RemoveContainer" containerID="e0a8661d91abe6650a2644a6bbb68f5a9be137c080d73c71bfeeedb79d7a94d1" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.240997 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-dc76b87d8-4l7z8"] Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.263786 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 19:54:46 crc kubenswrapper[4932]: E0218 19:54:46.264398 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api-log" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264411 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api-log" Feb 18 19:54:46 crc kubenswrapper[4932]: E0218 19:54:46.264423 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264430 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api" Feb 18 19:54:46 crc kubenswrapper[4932]: 
E0218 19:54:46.264439 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-log" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264447 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-log" Feb 18 19:54:46 crc kubenswrapper[4932]: E0218 19:54:46.264460 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-api" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264466 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-api" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264640 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-log" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264688 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264700 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="505f490e-dca8-49ae-aeeb-3392c065d841" containerName="barbican-api-log" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.264717 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" containerName="placement-api" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.265516 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.267058 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.267592 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.269963 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-k6mtn" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.289736 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.403375 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-openstack-config\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.403452 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.403779 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b926\" (UniqueName: \"kubernetes.io/projected/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-kube-api-access-9b926\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.403882 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-openstack-config-secret\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.505449 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-openstack-config\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.505496 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.505544 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b926\" (UniqueName: \"kubernetes.io/projected/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-kube-api-access-9b926\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.505563 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-openstack-config-secret\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.507199 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-openstack-config\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.510921 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.518614 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-openstack-config-secret\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.522744 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b926\" (UniqueName: \"kubernetes.io/projected/51bb24d5-d8d7-4bbb-a236-4967f9f7ece5-kube-api-access-9b926\") pod \"openstackclient\" (UID: \"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5\") " pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.656638 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.737596 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.764492 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75df984768-5mv9k" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 18 19:54:46 crc kubenswrapper[4932]: I0218 19:54:46.764619 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:54:47 crc kubenswrapper[4932]: W0218 19:54:47.158541 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51bb24d5_d8d7_4bbb_a236_4967f9f7ece5.slice/crio-576317358ec062dd59410cfa39c33984861d50e59d78c4fabb992d58e0aa10f3 WatchSource:0}: Error finding container 576317358ec062dd59410cfa39c33984861d50e59d78c4fabb992d58e0aa10f3: Status 404 returned error can't find the container with id 576317358ec062dd59410cfa39c33984861d50e59d78c4fabb992d58e0aa10f3 Feb 18 19:54:47 crc kubenswrapper[4932]: I0218 19:54:47.159026 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 19:54:47 crc kubenswrapper[4932]: I0218 19:54:47.193964 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86cc3d08-5639-4155-bee3-b1f461184a24" path="/var/lib/kubelet/pods/86cc3d08-5639-4155-bee3-b1f461184a24/volumes" Feb 18 19:54:47 crc kubenswrapper[4932]: I0218 19:54:47.215036 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5","Type":"ContainerStarted","Data":"576317358ec062dd59410cfa39c33984861d50e59d78c4fabb992d58e0aa10f3"} Feb 18 19:54:47 crc kubenswrapper[4932]: I0218 19:54:47.739796 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:47 crc kubenswrapper[4932]: I0218 19:54:47.740113 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:54:47 crc kubenswrapper[4932]: I0218 19:54:47.740902 4932 scope.go:117] "RemoveContainer" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" Feb 18 19:54:47 crc kubenswrapper[4932]: E0218 19:54:47.741155 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0882c686-1b07-4ac7-a6be-148eff7faa19)\"" pod="openstack/watcher-decision-engine-0" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.241126 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qlt9g"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.242991 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.260678 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qlt9g"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.346331 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-zxht6"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.347746 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.354220 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc8867f-cb56-47ad-9d08-a25feca678fc-operator-scripts\") pod \"nova-api-db-create-qlt9g\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.354334 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdj9x\" (UniqueName: \"kubernetes.io/projected/ccc8867f-cb56-47ad-9d08-a25feca678fc-kube-api-access-fdj9x\") pod \"nova-api-db-create-qlt9g\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.359136 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a786-account-create-update-jrb5b"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.360584 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.366549 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.367094 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zxht6"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.407225 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a786-account-create-update-jrb5b"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.455967 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fds59\" (UniqueName: \"kubernetes.io/projected/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-kube-api-access-fds59\") pod \"nova-cell0-db-create-zxht6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.456060 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvdhg\" (UniqueName: \"kubernetes.io/projected/aec70d32-3fdc-410f-9d9d-9b108e079cfe-kube-api-access-qvdhg\") pod \"nova-api-a786-account-create-update-jrb5b\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.456099 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec70d32-3fdc-410f-9d9d-9b108e079cfe-operator-scripts\") pod \"nova-api-a786-account-create-update-jrb5b\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.456138 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-operator-scripts\") pod \"nova-cell0-db-create-zxht6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.456204 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc8867f-cb56-47ad-9d08-a25feca678fc-operator-scripts\") pod \"nova-api-db-create-qlt9g\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.456274 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdj9x\" (UniqueName: \"kubernetes.io/projected/ccc8867f-cb56-47ad-9d08-a25feca678fc-kube-api-access-fdj9x\") pod \"nova-api-db-create-qlt9g\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.457108 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc8867f-cb56-47ad-9d08-a25feca678fc-operator-scripts\") pod \"nova-api-db-create-qlt9g\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.480772 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdj9x\" (UniqueName: \"kubernetes.io/projected/ccc8867f-cb56-47ad-9d08-a25feca678fc-kube-api-access-fdj9x\") pod \"nova-api-db-create-qlt9g\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.538214 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-db-create-xdsn5"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.539782 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.548405 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xdsn5"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.554116 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5405-account-create-update-8fjff"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.555363 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.557728 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fds59\" (UniqueName: \"kubernetes.io/projected/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-kube-api-access-fds59\") pod \"nova-cell0-db-create-zxht6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.557804 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvdhg\" (UniqueName: \"kubernetes.io/projected/aec70d32-3fdc-410f-9d9d-9b108e079cfe-kube-api-access-qvdhg\") pod \"nova-api-a786-account-create-update-jrb5b\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.557830 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec70d32-3fdc-410f-9d9d-9b108e079cfe-operator-scripts\") pod \"nova-api-a786-account-create-update-jrb5b\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " 
pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.557863 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-operator-scripts\") pod \"nova-cell0-db-create-zxht6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.558610 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-operator-scripts\") pod \"nova-cell0-db-create-zxht6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.559424 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec70d32-3fdc-410f-9d9d-9b108e079cfe-operator-scripts\") pod \"nova-api-a786-account-create-update-jrb5b\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.559444 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.567223 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.576694 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvdhg\" (UniqueName: \"kubernetes.io/projected/aec70d32-3fdc-410f-9d9d-9b108e079cfe-kube-api-access-qvdhg\") pod \"nova-api-a786-account-create-update-jrb5b\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.579238 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fds59\" (UniqueName: \"kubernetes.io/projected/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-kube-api-access-fds59\") pod \"nova-cell0-db-create-zxht6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.628010 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5405-account-create-update-8fjff"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.659561 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7703d71c-4ee9-4495-ab74-0a76c148d377-operator-scripts\") pod \"nova-cell1-db-create-xdsn5\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.659719 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-operator-scripts\") pod \"nova-cell0-5405-account-create-update-8fjff\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.659765 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktkjl\" (UniqueName: \"kubernetes.io/projected/7703d71c-4ee9-4495-ab74-0a76c148d377-kube-api-access-ktkjl\") pod \"nova-cell1-db-create-xdsn5\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.659842 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrll\" (UniqueName: \"kubernetes.io/projected/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-kube-api-access-lcrll\") pod \"nova-cell0-5405-account-create-update-8fjff\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.669069 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.683012 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.761340 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcrll\" (UniqueName: \"kubernetes.io/projected/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-kube-api-access-lcrll\") pod \"nova-cell0-5405-account-create-update-8fjff\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.761666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7703d71c-4ee9-4495-ab74-0a76c148d377-operator-scripts\") pod \"nova-cell1-db-create-xdsn5\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.761751 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-operator-scripts\") pod \"nova-cell0-5405-account-create-update-8fjff\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.761782 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktkjl\" (UniqueName: \"kubernetes.io/projected/7703d71c-4ee9-4495-ab74-0a76c148d377-kube-api-access-ktkjl\") pod \"nova-cell1-db-create-xdsn5\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.763399 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-operator-scripts\") pod 
\"nova-cell0-5405-account-create-update-8fjff\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.768260 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7703d71c-4ee9-4495-ab74-0a76c148d377-operator-scripts\") pod \"nova-cell1-db-create-xdsn5\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.780757 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2fd4-account-create-update-s9r68"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.782404 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.792545 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktkjl\" (UniqueName: \"kubernetes.io/projected/7703d71c-4ee9-4495-ab74-0a76c148d377-kube-api-access-ktkjl\") pod \"nova-cell1-db-create-xdsn5\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.804770 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.805114 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.814153 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcrll\" (UniqueName: \"kubernetes.io/projected/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-kube-api-access-lcrll\") pod \"nova-cell0-5405-account-create-update-8fjff\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.819902 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.827161 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2fd4-account-create-update-s9r68"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.868731 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9cbq\" (UniqueName: \"kubernetes.io/projected/20264fab-dfb6-4e8c-90c3-755f6877b798-kube-api-access-n9cbq\") pod \"nova-cell1-2fd4-account-create-update-s9r68\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.869083 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20264fab-dfb6-4e8c-90c3-755f6877b798-operator-scripts\") pod \"nova-cell1-2fd4-account-create-update-s9r68\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.910656 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-76d44d77c9-sdq6t"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.912247 4932 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.916191 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.916336 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.916374 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.921262 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76d44d77c9-sdq6t"] Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.971226 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20264fab-dfb6-4e8c-90c3-755f6877b798-operator-scripts\") pod \"nova-cell1-2fd4-account-create-update-s9r68\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.971336 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9cbq\" (UniqueName: \"kubernetes.io/projected/20264fab-dfb6-4e8c-90c3-755f6877b798-kube-api-access-n9cbq\") pod \"nova-cell1-2fd4-account-create-update-s9r68\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.972500 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20264fab-dfb6-4e8c-90c3-755f6877b798-operator-scripts\") pod \"nova-cell1-2fd4-account-create-update-s9r68\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " 
pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:48 crc kubenswrapper[4932]: I0218 19:54:48.994708 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9cbq\" (UniqueName: \"kubernetes.io/projected/20264fab-dfb6-4e8c-90c3-755f6877b798-kube-api-access-n9cbq\") pod \"nova-cell1-2fd4-account-create-update-s9r68\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074028 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d359b774-654c-4532-8f81-e1beddd68479-run-httpd\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074359 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hzmm\" (UniqueName: \"kubernetes.io/projected/d359b774-654c-4532-8f81-e1beddd68479-kube-api-access-4hzmm\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074464 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-public-tls-certs\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074586 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-combined-ca-bundle\") pod 
\"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074728 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-internal-tls-certs\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074821 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d359b774-654c-4532-8f81-e1beddd68479-etc-swift\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.074929 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-config-data\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.075022 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d359b774-654c-4532-8f81-e1beddd68479-log-httpd\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.131738 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177126 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-public-tls-certs\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177245 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-combined-ca-bundle\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177328 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-internal-tls-certs\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177350 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d359b774-654c-4532-8f81-e1beddd68479-etc-swift\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177379 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-config-data\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " 
pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177422 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d359b774-654c-4532-8f81-e1beddd68479-log-httpd\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177525 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d359b774-654c-4532-8f81-e1beddd68479-run-httpd\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.177594 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hzmm\" (UniqueName: \"kubernetes.io/projected/d359b774-654c-4532-8f81-e1beddd68479-kube-api-access-4hzmm\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.178439 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d359b774-654c-4532-8f81-e1beddd68479-log-httpd\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.188600 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-internal-tls-certs\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc 
kubenswrapper[4932]: I0218 19:54:49.188822 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d359b774-654c-4532-8f81-e1beddd68479-etc-swift\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.189231 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-public-tls-certs\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.189229 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-config-data\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.189592 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d359b774-654c-4532-8f81-e1beddd68479-run-httpd\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.189959 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d359b774-654c-4532-8f81-e1beddd68479-combined-ca-bundle\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.196121 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4hzmm\" (UniqueName: \"kubernetes.io/projected/d359b774-654c-4532-8f81-e1beddd68479-kube-api-access-4hzmm\") pod \"swift-proxy-76d44d77c9-sdq6t\" (UID: \"d359b774-654c-4532-8f81-e1beddd68479\") " pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.240588 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.248274 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qlt9g"] Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.281495 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.289122 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-central-agent" containerID="cri-o://b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" gracePeriod=30 Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.290060 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="proxy-httpd" containerID="cri-o://4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" gracePeriod=30 Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.290250 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="sg-core" containerID="cri-o://f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" gracePeriod=30 Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.290272 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-notification-agent" containerID="cri-o://7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" gracePeriod=30 Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.297198 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qlt9g" event={"ID":"ccc8867f-cb56-47ad-9d08-a25feca678fc","Type":"ContainerStarted","Data":"522c3953c61b5f262db6bc25a0ecf8315469173cbfa7c3d2fd3d78690775ae88"} Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.306840 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.184:3000/\": EOF" Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.336270 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a786-account-create-update-jrb5b"] Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.422130 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zxht6"] Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.554319 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5405-account-create-update-8fjff"] Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.595720 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xdsn5"] Feb 18 19:54:49 crc kubenswrapper[4932]: I0218 19:54:49.859912 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2fd4-account-create-update-s9r68"] Feb 18 19:54:49 crc kubenswrapper[4932]: W0218 19:54:49.874466 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20264fab_dfb6_4e8c_90c3_755f6877b798.slice/crio-052fd6b2aa50cb6fcbae51d38be6c0dd30d9db9bd759de575071ca146e8edf7a WatchSource:0}: Error finding 
container 052fd6b2aa50cb6fcbae51d38be6c0dd30d9db9bd759de575071ca146e8edf7a: Status 404 returned error can't find the container with id 052fd6b2aa50cb6fcbae51d38be6c0dd30d9db9bd759de575071ca146e8edf7a Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.181055 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76d44d77c9-sdq6t"] Feb 18 19:54:50 crc kubenswrapper[4932]: W0218 19:54:50.237672 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd359b774_654c_4532_8f81_e1beddd68479.slice/crio-cd00a3bf7f934316cd7886a97fd96cace53f2b25ac02b0bb0a6004bf3ecc2428 WatchSource:0}: Error finding container cd00a3bf7f934316cd7886a97fd96cace53f2b25ac02b0bb0a6004bf3ecc2428: Status 404 returned error can't find the container with id cd00a3bf7f934316cd7886a97fd96cace53f2b25ac02b0bb0a6004bf3ecc2428 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.314026 4932 generic.go:334] "Generic (PLEG): container finished" podID="aec70d32-3fdc-410f-9d9d-9b108e079cfe" containerID="a9bd3203306587d945952a2d8b8a38aa992a6b26567d9b7e7b075edf3005412d" exitCode=0 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.314109 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a786-account-create-update-jrb5b" event={"ID":"aec70d32-3fdc-410f-9d9d-9b108e079cfe","Type":"ContainerDied","Data":"a9bd3203306587d945952a2d8b8a38aa992a6b26567d9b7e7b075edf3005412d"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.314134 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a786-account-create-update-jrb5b" event={"ID":"aec70d32-3fdc-410f-9d9d-9b108e079cfe","Type":"ContainerStarted","Data":"eccf3794a52ee544f0c22944383f1094d40cfa00baf41cfaf87d8812fbfa11b9"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.317163 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5405-account-create-update-8fjff" 
event={"ID":"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3","Type":"ContainerStarted","Data":"5ccb855943775d6e9adaf49444e172677634a8b560d436edfff1c39a86a31e48"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.317212 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5405-account-create-update-8fjff" event={"ID":"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3","Type":"ContainerStarted","Data":"66cc7eba623075a858422eb55af26df80c38d6f6aee87f6f13279af0d186f3b3"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.332533 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xdsn5" event={"ID":"7703d71c-4ee9-4495-ab74-0a76c148d377","Type":"ContainerStarted","Data":"2eac601de5fc1220879b1962da46431b85d3f67bca44ebc6031ccc59809d3f58"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.332589 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xdsn5" event={"ID":"7703d71c-4ee9-4495-ab74-0a76c148d377","Type":"ContainerStarted","Data":"0fd5d1eb515a389872d9f7400736a47e9170b5a4b1480bff777bfe89c3983124"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.359499 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.363385 4932 generic.go:334] "Generic (PLEG): container finished" podID="a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" containerID="708bd68c17f2cb8bb6aefdb45fc9ab2a2b088e8be75ba3d7c52b1b8b365c0f1f" exitCode=0 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.363438 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zxht6" event={"ID":"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6","Type":"ContainerDied","Data":"708bd68c17f2cb8bb6aefdb45fc9ab2a2b088e8be75ba3d7c52b1b8b365c0f1f"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.363460 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zxht6" event={"ID":"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6","Type":"ContainerStarted","Data":"2b9d7295e4991fa83586cb008679930ba6602febda912ae2e974202145b5bda9"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.369365 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76d44d77c9-sdq6t" event={"ID":"d359b774-654c-4532-8f81-e1beddd68479","Type":"ContainerStarted","Data":"cd00a3bf7f934316cd7886a97fd96cace53f2b25ac02b0bb0a6004bf3ecc2428"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.375335 4932 generic.go:334] "Generic (PLEG): container finished" podID="ccc8867f-cb56-47ad-9d08-a25feca678fc" containerID="561bed36cff9fe4632c1003655b4ef598d4e8ea47f27f52a6c7b3f87e135ec7f" exitCode=0 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.375370 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qlt9g" event={"ID":"ccc8867f-cb56-47ad-9d08-a25feca678fc","Type":"ContainerDied","Data":"561bed36cff9fe4632c1003655b4ef598d4e8ea47f27f52a6c7b3f87e135ec7f"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.388926 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-5405-account-create-update-8fjff" podStartSLOduration=2.388912498 podStartE2EDuration="2.388912498s" podCreationTimestamp="2026-02-18 19:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:50.373148688 +0000 UTC m=+1253.955103533" watchObservedRunningTime="2026-02-18 19:54:50.388912498 +0000 UTC m=+1253.970867343" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.389232 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-xdsn5" podStartSLOduration=2.389227246 podStartE2EDuration="2.389227246s" podCreationTimestamp="2026-02-18 19:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:50.387121104 +0000 UTC m=+1253.969075949" watchObservedRunningTime="2026-02-18 19:54:50.389227246 +0000 UTC m=+1253.971182091" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394339 4932 generic.go:334] "Generic (PLEG): container finished" podID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerID="4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" exitCode=0 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394360 4932 generic.go:334] "Generic (PLEG): container finished" podID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerID="f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" exitCode=2 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394369 4932 generic.go:334] "Generic (PLEG): container finished" podID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerID="7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" exitCode=0 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394375 4932 generic.go:334] "Generic (PLEG): container finished" podID="f81248d0-bf30-4447-ad78-7bfe9048bbea" 
containerID="b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" exitCode=0 Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394417 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerDied","Data":"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394446 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerDied","Data":"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394456 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerDied","Data":"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394451 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394475 4932 scope.go:117] "RemoveContainer" containerID="4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.394465 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f81248d0-bf30-4447-ad78-7bfe9048bbea","Type":"ContainerDied","Data":"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.397050 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" event={"ID":"20264fab-dfb6-4e8c-90c3-755f6877b798","Type":"ContainerStarted","Data":"052fd6b2aa50cb6fcbae51d38be6c0dd30d9db9bd759de575071ca146e8edf7a"} Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.412621 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-sg-core-conf-yaml\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.412704 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-scripts\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.412726 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-combined-ca-bundle\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.412834 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-run-httpd\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.412971 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5dzv\" (UniqueName: \"kubernetes.io/projected/f81248d0-bf30-4447-ad78-7bfe9048bbea-kube-api-access-x5dzv\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.412996 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-config-data\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.413018 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-log-httpd\") pod \"f81248d0-bf30-4447-ad78-7bfe9048bbea\" (UID: \"f81248d0-bf30-4447-ad78-7bfe9048bbea\") " Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.414872 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.418235 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.418523 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-scripts" (OuterVolumeSpecName: "scripts") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.421821 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f81248d0-bf30-4447-ad78-7bfe9048bbea-kube-api-access-x5dzv" (OuterVolumeSpecName: "kube-api-access-x5dzv") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "kube-api-access-x5dzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.450377 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.457205 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" podStartSLOduration=2.457187361 podStartE2EDuration="2.457187361s" podCreationTimestamp="2026-02-18 19:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:50.443519743 +0000 UTC m=+1254.025474588" watchObservedRunningTime="2026-02-18 19:54:50.457187361 +0000 UTC m=+1254.039142206" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.487855 4932 scope.go:117] "RemoveContainer" containerID="f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.515327 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.515361 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5dzv\" (UniqueName: \"kubernetes.io/projected/f81248d0-bf30-4447-ad78-7bfe9048bbea-kube-api-access-x5dzv\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.515372 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f81248d0-bf30-4447-ad78-7bfe9048bbea-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.515380 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.515388 4932 reconciler_common.go:293] "Volume detached 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.592320 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-config-data" (OuterVolumeSpecName: "config-data") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.617543 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.618700 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f81248d0-bf30-4447-ad78-7bfe9048bbea" (UID: "f81248d0-bf30-4447-ad78-7bfe9048bbea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.703187 4932 scope.go:117] "RemoveContainer" containerID="7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.718782 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81248d0-bf30-4447-ad78-7bfe9048bbea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.732602 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.749158 4932 scope.go:117] "RemoveContainer" containerID="b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.751355 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.767525 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.768038 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-central-agent" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768062 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-central-agent" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.768082 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="proxy-httpd" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768091 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="proxy-httpd" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.768105 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-notification-agent" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768113 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-notification-agent" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.768157 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="sg-core" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768165 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="sg-core" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768421 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="sg-core" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768447 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-notification-agent" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768474 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="proxy-httpd" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.768493 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" containerName="ceilometer-central-agent" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.782444 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.782553 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.784708 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.784781 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.792546 4932 scope.go:117] "RemoveContainer" containerID="4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.794149 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": container with ID starting with 4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c not found: ID does not exist" containerID="4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.794190 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c"} err="failed to get container status \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": rpc error: code = NotFound desc = could not find container \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": container with ID starting with 4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.794212 4932 scope.go:117] "RemoveContainer" containerID="f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.794592 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": container with ID starting with f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09 not found: ID does not exist" containerID="f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.794613 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09"} err="failed to get container status \"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": rpc error: code = NotFound desc = could not find container \"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": container with ID starting with f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.794625 4932 scope.go:117] "RemoveContainer" containerID="7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.795530 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": container with ID starting with 7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41 not found: ID does not exist" containerID="7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.795552 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41"} err="failed to get container status \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": rpc error: code = NotFound desc = could not find container \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": container with ID 
starting with 7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.795564 4932 scope.go:117] "RemoveContainer" containerID="b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" Feb 18 19:54:50 crc kubenswrapper[4932]: E0218 19:54:50.797240 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": container with ID starting with b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048 not found: ID does not exist" containerID="b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.797332 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048"} err="failed to get container status \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": rpc error: code = NotFound desc = could not find container \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": container with ID starting with b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.797377 4932 scope.go:117] "RemoveContainer" containerID="4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.798853 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c"} err="failed to get container status \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": rpc error: code = NotFound desc = could not find container \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": 
container with ID starting with 4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.798875 4932 scope.go:117] "RemoveContainer" containerID="f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.799351 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09"} err="failed to get container status \"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": rpc error: code = NotFound desc = could not find container \"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": container with ID starting with f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.799369 4932 scope.go:117] "RemoveContainer" containerID="7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.799524 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41"} err="failed to get container status \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": rpc error: code = NotFound desc = could not find container \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": container with ID starting with 7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.799539 4932 scope.go:117] "RemoveContainer" containerID="b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.800256 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048"} err="failed to get container status \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": rpc error: code = NotFound desc = could not find container \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": container with ID starting with b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.800297 4932 scope.go:117] "RemoveContainer" containerID="4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.804380 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c"} err="failed to get container status \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": rpc error: code = NotFound desc = could not find container \"4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c\": container with ID starting with 4b584618f751739fac22fcee3fef0fa91b1406db6d5c9f5e995cfa2b57003b7c not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.804424 4932 scope.go:117] "RemoveContainer" containerID="f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.807289 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09"} err="failed to get container status \"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": rpc error: code = NotFound desc = could not find container \"f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09\": container with ID starting with f7dfb483714b9ddd2751271da618937b1abd4e91f277de2ff2f54b9003b8fb09 not found: ID does not 
exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.807340 4932 scope.go:117] "RemoveContainer" containerID="7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.807789 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41"} err="failed to get container status \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": rpc error: code = NotFound desc = could not find container \"7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41\": container with ID starting with 7263eaa4c7232a694c0f65e74aa115b2d5da146e9c80e2137ed47dc403d14b41 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.807829 4932 scope.go:117] "RemoveContainer" containerID="b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.809156 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048"} err="failed to get container status \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": rpc error: code = NotFound desc = could not find container \"b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048\": container with ID starting with b40e946d267982ed3517c8787a70027ff613e0ceaf23caa8346215ee2f505048 not found: ID does not exist" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821110 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821166 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-log-httpd\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821220 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821286 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tm9j\" (UniqueName: \"kubernetes.io/projected/42f96153-201b-4efb-952d-ec27dcbd8c0c-kube-api-access-8tm9j\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821393 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-run-httpd\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821428 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-config-data\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.821459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-scripts\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.923704 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-log-httpd\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.923781 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.923819 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tm9j\" (UniqueName: \"kubernetes.io/projected/42f96153-201b-4efb-952d-ec27dcbd8c0c-kube-api-access-8tm9j\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.923904 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-run-httpd\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.923926 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-config-data\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 
19:54:50.923970 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-scripts\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.924024 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.924854 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-log-httpd\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.925732 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-run-httpd\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.929970 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.932026 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-config-data\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " 
pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.932841 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.932950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-scripts\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:50 crc kubenswrapper[4932]: I0218 19:54:50.939043 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tm9j\" (UniqueName: \"kubernetes.io/projected/42f96153-201b-4efb-952d-ec27dcbd8c0c-kube-api-access-8tm9j\") pod \"ceilometer-0\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " pod="openstack/ceilometer-0" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.139242 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.211055 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f81248d0-bf30-4447-ad78-7bfe9048bbea" path="/var/lib/kubelet/pods/f81248d0-bf30-4447-ad78-7bfe9048bbea/volumes" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.409963 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.417676 4932 generic.go:334] "Generic (PLEG): container finished" podID="20264fab-dfb6-4e8c-90c3-755f6877b798" containerID="35671852602ab05670d4f45f3855e4d52f08702c9d127db3894e27656cb622ec" exitCode=0 Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.417713 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" event={"ID":"20264fab-dfb6-4e8c-90c3-755f6877b798","Type":"ContainerDied","Data":"35671852602ab05670d4f45f3855e4d52f08702c9d127db3894e27656cb622ec"} Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.423010 4932 generic.go:334] "Generic (PLEG): container finished" podID="b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" containerID="5ccb855943775d6e9adaf49444e172677634a8b560d436edfff1c39a86a31e48" exitCode=0 Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.423075 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5405-account-create-update-8fjff" event={"ID":"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3","Type":"ContainerDied","Data":"5ccb855943775d6e9adaf49444e172677634a8b560d436edfff1c39a86a31e48"} Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.431395 4932 generic.go:334] "Generic (PLEG): container finished" podID="7703d71c-4ee9-4495-ab74-0a76c148d377" containerID="2eac601de5fc1220879b1962da46431b85d3f67bca44ebc6031ccc59809d3f58" exitCode=0 Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.431479 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-db-create-xdsn5" event={"ID":"7703d71c-4ee9-4495-ab74-0a76c148d377","Type":"ContainerDied","Data":"2eac601de5fc1220879b1962da46431b85d3f67bca44ebc6031ccc59809d3f58"} Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.456884 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76d44d77c9-sdq6t" event={"ID":"d359b774-654c-4532-8f81-e1beddd68479","Type":"ContainerStarted","Data":"9ef9fe7fd0dd824a4a8a97997a3b05a087e8e402d080eb76fa8c5145581ddd86"} Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.456963 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76d44d77c9-sdq6t" event={"ID":"d359b774-654c-4532-8f81-e1beddd68479","Type":"ContainerStarted","Data":"828c4029ea8fb0715354ae424762b45e3d362d72ff1a5f3ab9d98154f78c36b0"} Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.458018 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.458048 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.522761 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-76d44d77c9-sdq6t" podStartSLOduration=3.522746574 podStartE2EDuration="3.522746574s" podCreationTimestamp="2026-02-18 19:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:54:51.50001009 +0000 UTC m=+1255.081964935" watchObservedRunningTime="2026-02-18 19:54:51.522746574 +0000 UTC m=+1255.104701419" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.610610 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.905041 4932 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.948748 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec70d32-3fdc-410f-9d9d-9b108e079cfe-operator-scripts\") pod \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.948785 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvdhg\" (UniqueName: \"kubernetes.io/projected/aec70d32-3fdc-410f-9d9d-9b108e079cfe-kube-api-access-qvdhg\") pod \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\" (UID: \"aec70d32-3fdc-410f-9d9d-9b108e079cfe\") " Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.954297 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec70d32-3fdc-410f-9d9d-9b108e079cfe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aec70d32-3fdc-410f-9d9d-9b108e079cfe" (UID: "aec70d32-3fdc-410f-9d9d-9b108e079cfe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.954869 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec70d32-3fdc-410f-9d9d-9b108e079cfe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:51 crc kubenswrapper[4932]: I0218 19:54:51.961415 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec70d32-3fdc-410f-9d9d-9b108e079cfe-kube-api-access-qvdhg" (OuterVolumeSpecName: "kube-api-access-qvdhg") pod "aec70d32-3fdc-410f-9d9d-9b108e079cfe" (UID: "aec70d32-3fdc-410f-9d9d-9b108e079cfe"). InnerVolumeSpecName "kube-api-access-qvdhg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.047619 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.056238 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvdhg\" (UniqueName: \"kubernetes.io/projected/aec70d32-3fdc-410f-9d9d-9b108e079cfe-kube-api-access-qvdhg\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.059212 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.172312 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc8867f-cb56-47ad-9d08-a25feca678fc-operator-scripts\") pod \"ccc8867f-cb56-47ad-9d08-a25feca678fc\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.172438 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fds59\" (UniqueName: \"kubernetes.io/projected/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-kube-api-access-fds59\") pod \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.172514 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdj9x\" (UniqueName: \"kubernetes.io/projected/ccc8867f-cb56-47ad-9d08-a25feca678fc-kube-api-access-fdj9x\") pod \"ccc8867f-cb56-47ad-9d08-a25feca678fc\" (UID: \"ccc8867f-cb56-47ad-9d08-a25feca678fc\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.172656 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-operator-scripts\") pod \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\" (UID: \"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.180480 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" (UID: "a6ae5264-a3f4-4f05-b7ff-942b182ee6e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.180563 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc8867f-cb56-47ad-9d08-a25feca678fc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccc8867f-cb56-47ad-9d08-a25feca678fc" (UID: "ccc8867f-cb56-47ad-9d08-a25feca678fc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.184726 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-kube-api-access-fds59" (OuterVolumeSpecName: "kube-api-access-fds59") pod "a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" (UID: "a6ae5264-a3f4-4f05-b7ff-942b182ee6e6"). InnerVolumeSpecName "kube-api-access-fds59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.193548 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc8867f-cb56-47ad-9d08-a25feca678fc-kube-api-access-fdj9x" (OuterVolumeSpecName: "kube-api-access-fdj9x") pod "ccc8867f-cb56-47ad-9d08-a25feca678fc" (UID: "ccc8867f-cb56-47ad-9d08-a25feca678fc"). InnerVolumeSpecName "kube-api-access-fdj9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.274738 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.274785 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc8867f-cb56-47ad-9d08-a25feca678fc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.274799 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fds59\" (UniqueName: \"kubernetes.io/projected/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6-kube-api-access-fds59\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.274814 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdj9x\" (UniqueName: \"kubernetes.io/projected/ccc8867f-cb56-47ad-9d08-a25feca678fc-kube-api-access-fdj9x\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.483665 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a786-account-create-update-jrb5b" event={"ID":"aec70d32-3fdc-410f-9d9d-9b108e079cfe","Type":"ContainerDied","Data":"eccf3794a52ee544f0c22944383f1094d40cfa00baf41cfaf87d8812fbfa11b9"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.483712 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eccf3794a52ee544f0c22944383f1094d40cfa00baf41cfaf87d8812fbfa11b9" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.483783 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a786-account-create-update-jrb5b" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.489829 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerStarted","Data":"06ac8abe3739afaf69ebc58f3baaf3e27bfecb005ce000a65602459c58cfcb6e"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.489871 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerStarted","Data":"3d2064737241a9f7bf6098cc357b389e019add37b83420b1cfe158e700514b8a"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.489879 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerStarted","Data":"bb80acf58868f86869a2edd8ebddc1372e30bf85bb8346fbc78e3b03f8adb9d4"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.497617 4932 generic.go:334] "Generic (PLEG): container finished" podID="dec0e208-2bfc-4661-8395-c56418bb9307" containerID="8938c10b66b4f6d7e20437bee59ce3c16a7181c0a809f3e865b01b219862d8d7" exitCode=137 Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.497672 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75df984768-5mv9k" event={"ID":"dec0e208-2bfc-4661-8395-c56418bb9307","Type":"ContainerDied","Data":"8938c10b66b4f6d7e20437bee59ce3c16a7181c0a809f3e865b01b219862d8d7"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.514347 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zxht6" event={"ID":"a6ae5264-a3f4-4f05-b7ff-942b182ee6e6","Type":"ContainerDied","Data":"2b9d7295e4991fa83586cb008679930ba6602febda912ae2e974202145b5bda9"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.514389 4932 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2b9d7295e4991fa83586cb008679930ba6602febda912ae2e974202145b5bda9" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.514444 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zxht6" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.532034 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qlt9g" event={"ID":"ccc8867f-cb56-47ad-9d08-a25feca678fc","Type":"ContainerDied","Data":"522c3953c61b5f262db6bc25a0ecf8315469173cbfa7c3d2fd3d78690775ae88"} Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.532093 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="522c3953c61b5f262db6bc25a0ecf8315469173cbfa7c3d2fd3d78690775ae88" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.532046 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qlt9g" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.704048 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.782644 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-tls-certs\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.782717 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-scripts\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.782855 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-config-data\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.782905 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec0e208-2bfc-4661-8395-c56418bb9307-logs\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.783025 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-combined-ca-bundle\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.783072 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h566q\" (UniqueName: 
\"kubernetes.io/projected/dec0e208-2bfc-4661-8395-c56418bb9307-kube-api-access-h566q\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.783113 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-secret-key\") pod \"dec0e208-2bfc-4661-8395-c56418bb9307\" (UID: \"dec0e208-2bfc-4661-8395-c56418bb9307\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.785286 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dec0e208-2bfc-4661-8395-c56418bb9307-logs" (OuterVolumeSpecName: "logs") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.792637 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.804447 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec0e208-2bfc-4661-8395-c56418bb9307-kube-api-access-h566q" (OuterVolumeSpecName: "kube-api-access-h566q") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "kube-api-access-h566q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.820665 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-config-data" (OuterVolumeSpecName: "config-data") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.875862 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.878782 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-scripts" (OuterVolumeSpecName: "scripts") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.895688 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.895725 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dec0e208-2bfc-4661-8395-c56418bb9307-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.895737 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.895751 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h566q\" (UniqueName: \"kubernetes.io/projected/dec0e208-2bfc-4661-8395-c56418bb9307-kube-api-access-h566q\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.895764 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.895774 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dec0e208-2bfc-4661-8395-c56418bb9307-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.948533 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.970950 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "dec0e208-2bfc-4661-8395-c56418bb9307" (UID: "dec0e208-2bfc-4661-8395-c56418bb9307"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.997373 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9cbq\" (UniqueName: \"kubernetes.io/projected/20264fab-dfb6-4e8c-90c3-755f6877b798-kube-api-access-n9cbq\") pod \"20264fab-dfb6-4e8c-90c3-755f6877b798\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.997455 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20264fab-dfb6-4e8c-90c3-755f6877b798-operator-scripts\") pod \"20264fab-dfb6-4e8c-90c3-755f6877b798\" (UID: \"20264fab-dfb6-4e8c-90c3-755f6877b798\") " Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.997999 4932 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dec0e208-2bfc-4661-8395-c56418bb9307-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4932]: I0218 19:54:52.998370 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20264fab-dfb6-4e8c-90c3-755f6877b798-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20264fab-dfb6-4e8c-90c3-755f6877b798" (UID: "20264fab-dfb6-4e8c-90c3-755f6877b798"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.006410 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20264fab-dfb6-4e8c-90c3-755f6877b798-kube-api-access-n9cbq" (OuterVolumeSpecName: "kube-api-access-n9cbq") pod "20264fab-dfb6-4e8c-90c3-755f6877b798" (UID: "20264fab-dfb6-4e8c-90c3-755f6877b798"). InnerVolumeSpecName "kube-api-access-n9cbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.045997 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.078561 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.099016 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcrll\" (UniqueName: \"kubernetes.io/projected/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-kube-api-access-lcrll\") pod \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.103231 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-operator-scripts\") pod \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\" (UID: \"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3\") " Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.103930 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" (UID: "b44b5c9c-2c44-4e46-a14f-a8a0c93781d3"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.105320 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9cbq\" (UniqueName: \"kubernetes.io/projected/20264fab-dfb6-4e8c-90c3-755f6877b798-kube-api-access-n9cbq\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.105344 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20264fab-dfb6-4e8c-90c3-755f6877b798-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.105353 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.105608 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-kube-api-access-lcrll" (OuterVolumeSpecName: "kube-api-access-lcrll") pod "b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" (UID: "b44b5c9c-2c44-4e46-a14f-a8a0c93781d3"). InnerVolumeSpecName "kube-api-access-lcrll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.209113 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktkjl\" (UniqueName: \"kubernetes.io/projected/7703d71c-4ee9-4495-ab74-0a76c148d377-kube-api-access-ktkjl\") pod \"7703d71c-4ee9-4495-ab74-0a76c148d377\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.209546 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7703d71c-4ee9-4495-ab74-0a76c148d377-operator-scripts\") pod \"7703d71c-4ee9-4495-ab74-0a76c148d377\" (UID: \"7703d71c-4ee9-4495-ab74-0a76c148d377\") " Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.210534 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcrll\" (UniqueName: \"kubernetes.io/projected/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3-kube-api-access-lcrll\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.210925 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7703d71c-4ee9-4495-ab74-0a76c148d377-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7703d71c-4ee9-4495-ab74-0a76c148d377" (UID: "7703d71c-4ee9-4495-ab74-0a76c148d377"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.213656 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7703d71c-4ee9-4495-ab74-0a76c148d377-kube-api-access-ktkjl" (OuterVolumeSpecName: "kube-api-access-ktkjl") pod "7703d71c-4ee9-4495-ab74-0a76c148d377" (UID: "7703d71c-4ee9-4495-ab74-0a76c148d377"). InnerVolumeSpecName "kube-api-access-ktkjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.313286 4932 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7703d71c-4ee9-4495-ab74-0a76c148d377-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.313340 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktkjl\" (UniqueName: \"kubernetes.io/projected/7703d71c-4ee9-4495-ab74-0a76c148d377-kube-api-access-ktkjl\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.545401 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" event={"ID":"20264fab-dfb6-4e8c-90c3-755f6877b798","Type":"ContainerDied","Data":"052fd6b2aa50cb6fcbae51d38be6c0dd30d9db9bd759de575071ca146e8edf7a"} Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.545467 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="052fd6b2aa50cb6fcbae51d38be6c0dd30d9db9bd759de575071ca146e8edf7a" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.545564 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2fd4-account-create-update-s9r68" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.548521 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerStarted","Data":"ce107b52c8f3445b9ec76a561859b862dc62dc9976d0d2ac59adfc9855cf6bd0"} Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.551482 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75df984768-5mv9k" event={"ID":"dec0e208-2bfc-4661-8395-c56418bb9307","Type":"ContainerDied","Data":"0a23db9200dc7e24b7810e1e26b3a65a213a638cce894066f30cf730bad21368"} Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.551597 4932 scope.go:117] "RemoveContainer" containerID="c14c2db9c2e97146ded5c1be64f375a20e4d3dc8027f2eb556b8226700b572e9" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.551788 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75df984768-5mv9k" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.556112 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5405-account-create-update-8fjff" event={"ID":"b44b5c9c-2c44-4e46-a14f-a8a0c93781d3","Type":"ContainerDied","Data":"66cc7eba623075a858422eb55af26df80c38d6f6aee87f6f13279af0d186f3b3"} Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.556152 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66cc7eba623075a858422eb55af26df80c38d6f6aee87f6f13279af0d186f3b3" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.556152 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5405-account-create-update-8fjff" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.566261 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xdsn5" event={"ID":"7703d71c-4ee9-4495-ab74-0a76c148d377","Type":"ContainerDied","Data":"0fd5d1eb515a389872d9f7400736a47e9170b5a4b1480bff777bfe89c3983124"} Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.566318 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fd5d1eb515a389872d9f7400736a47e9170b5a4b1480bff777bfe89c3983124" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.566409 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xdsn5" Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.599155 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75df984768-5mv9k"] Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.606937 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75df984768-5mv9k"] Feb 18 19:54:53 crc kubenswrapper[4932]: I0218 19:54:53.753405 4932 scope.go:117] "RemoveContainer" containerID="8938c10b66b4f6d7e20437bee59ce3c16a7181c0a809f3e865b01b219862d8d7" Feb 18 19:54:55 crc kubenswrapper[4932]: I0218 19:54:55.189920 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" path="/var/lib/kubelet/pods/dec0e208-2bfc-4661-8395-c56418bb9307/volumes" Feb 18 19:54:57 crc kubenswrapper[4932]: I0218 19:54:57.640326 4932 generic.go:334] "Generic (PLEG): container finished" podID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerID="ef6f599a3d418b9b543c98edcdf9f0f0c968498f5968cc9b3a5e3260ba0ced73" exitCode=137 Feb 18 19:54:57 crc kubenswrapper[4932]: I0218 19:54:57.640449 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"30bd9d4f-e84f-4320-9057-80d3d53f7ebb","Type":"ContainerDied","Data":"ef6f599a3d418b9b543c98edcdf9f0f0c968498f5968cc9b3a5e3260ba0ced73"} Feb 18 19:54:57 crc kubenswrapper[4932]: I0218 19:54:57.818109 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.183:8776/healthcheck\": dial tcp 10.217.0.183:8776: connect: connection refused" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.180529 4932 scope.go:117] "RemoveContainer" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.180763 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(0882c686-1b07-4ac7-a6be-148eff7faa19)\"" pod="openstack/watcher-decision-engine-0" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.810273 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-64b8m"] Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.810648 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.810661 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.810677 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon-log" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.810684 4932 
state_mem.go:107] "Deleted CPUSet assignment" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon-log" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.810695 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20264fab-dfb6-4e8c-90c3-755f6877b798" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.810703 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="20264fab-dfb6-4e8c-90c3-755f6877b798" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.810728 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7703d71c-4ee9-4495-ab74-0a76c148d377" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.810734 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7703d71c-4ee9-4495-ab74-0a76c148d377" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.810748 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc8867f-cb56-47ad-9d08-a25feca678fc" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.810753 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc8867f-cb56-47ad-9d08-a25feca678fc" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.810765 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811029 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.811040 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec70d32-3fdc-410f-9d9d-9b108e079cfe" 
containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811046 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec70d32-3fdc-410f-9d9d-9b108e079cfe" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: E0218 19:54:58.811055 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811060 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811228 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec70d32-3fdc-410f-9d9d-9b108e079cfe" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811243 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon-log" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811252 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811260 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec0e208-2bfc-4661-8395-c56418bb9307" containerName="horizon" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811274 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc8867f-cb56-47ad-9d08-a25feca678fc" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811282 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811292 4932 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="20264fab-dfb6-4e8c-90c3-755f6877b798" containerName="mariadb-account-create-update" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811298 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7703d71c-4ee9-4495-ab74-0a76c148d377" containerName="mariadb-database-create" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.811862 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.814305 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.814307 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.814845 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rd8q2" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.834384 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-64b8m"] Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.961154 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-config-data\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.961243 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m25dj\" (UniqueName: \"kubernetes.io/projected/c88334ec-64f6-41ba-aee5-d5323e8c0c25-kube-api-access-m25dj\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " 
pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.961318 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-scripts\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:58 crc kubenswrapper[4932]: I0218 19:54:58.961500 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.063640 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-config-data\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.063708 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m25dj\" (UniqueName: \"kubernetes.io/projected/c88334ec-64f6-41ba-aee5-d5323e8c0c25-kube-api-access-m25dj\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.063775 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-scripts\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: 
\"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.063824 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.071088 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.072612 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-scripts\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.072625 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-config-data\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.082621 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m25dj\" (UniqueName: \"kubernetes.io/projected/c88334ec-64f6-41ba-aee5-d5323e8c0c25-kube-api-access-m25dj\") pod \"nova-cell0-conductor-db-sync-64b8m\" (UID: 
\"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.095650 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-57c4489bcf-qchgn" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.166732 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.168115 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5966846f96-hbrsw"] Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.168314 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5966846f96-hbrsw" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-api" containerID="cri-o://8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f" gracePeriod=30 Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.175850 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5966846f96-hbrsw" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-httpd" containerID="cri-o://44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09" gracePeriod=30 Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.248293 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.261274 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76d44d77c9-sdq6t" Feb 18 19:54:59 crc kubenswrapper[4932]: I0218 19:54:59.666022 4932 generic.go:334] "Generic (PLEG): container finished" podID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerID="44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09" exitCode=0 Feb 18 19:54:59 crc kubenswrapper[4932]: 
I0218 19:54:59.667088 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5966846f96-hbrsw" event={"ID":"fb1c0405-2770-4a03-ba51-c78005d57ad9","Type":"ContainerDied","Data":"44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09"} Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.358955 4932 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod5bd90883-79db-4903-87ab-828b9608f9fa"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod5bd90883-79db-4903-87ab-828b9608f9fa] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5bd90883_79db_4903_87ab_828b9608f9fa.slice" Feb 18 19:55:01 crc kubenswrapper[4932]: E0218 19:55:01.359437 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod5bd90883-79db-4903-87ab-828b9608f9fa] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod5bd90883-79db-4903-87ab-828b9608f9fa] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5bd90883_79db_4903_87ab_828b9608f9fa.slice" pod="openstack/watcher-applier-0" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.384105 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.687160 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.841627 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.860533 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.893767 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.894958 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.900874 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 19:55:01 crc kubenswrapper[4932]: I0218 19:55:01.910460 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.011777 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.022459 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzbg4\" (UniqueName: \"kubernetes.io/projected/c7feb603-1c6f-423f-979e-840070052a6f-kube-api-access-wzbg4\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.022546 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7feb603-1c6f-423f-979e-840070052a6f-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.022578 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7feb603-1c6f-423f-979e-840070052a6f-logs\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.022618 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7feb603-1c6f-423f-979e-840070052a6f-config-data\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124075 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdllf\" (UniqueName: \"kubernetes.io/projected/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-kube-api-access-wdllf\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: 
I0218 19:55:02.124320 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-etc-machine-id\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124344 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-logs\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124427 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-scripts\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124490 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124538 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data-custom\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" (UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124591 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-combined-ca-bundle\") pod \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\" 
(UID: \"30bd9d4f-e84f-4320-9057-80d3d53f7ebb\") " Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124822 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7feb603-1c6f-423f-979e-840070052a6f-config-data\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124906 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzbg4\" (UniqueName: \"kubernetes.io/projected/c7feb603-1c6f-423f-979e-840070052a6f-kube-api-access-wzbg4\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124964 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7feb603-1c6f-423f-979e-840070052a6f-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.124990 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7feb603-1c6f-423f-979e-840070052a6f-logs\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.126219 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7feb603-1c6f-423f-979e-840070052a6f-logs\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.127330 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.129596 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-logs" (OuterVolumeSpecName: "logs") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.131570 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.137430 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7feb603-1c6f-423f-979e-840070052a6f-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.137569 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-kube-api-access-wdllf" (OuterVolumeSpecName: "kube-api-access-wdllf") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). InnerVolumeSpecName "kube-api-access-wdllf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.138083 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7feb603-1c6f-423f-979e-840070052a6f-config-data\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.140433 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-scripts" (OuterVolumeSpecName: "scripts") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.152978 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzbg4\" (UniqueName: \"kubernetes.io/projected/c7feb603-1c6f-423f-979e-840070052a6f-kube-api-access-wzbg4\") pod \"watcher-applier-0\" (UID: \"c7feb603-1c6f-423f-979e-840070052a6f\") " pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.169497 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.223205 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data" (OuterVolumeSpecName: "config-data") pod "30bd9d4f-e84f-4320-9057-80d3d53f7ebb" (UID: "30bd9d4f-e84f-4320-9057-80d3d53f7ebb"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.225963 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227118 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdllf\" (UniqueName: \"kubernetes.io/projected/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-kube-api-access-wdllf\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227138 4932 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227149 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227158 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227167 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227187 4932 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.227195 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/30bd9d4f-e84f-4320-9057-80d3d53f7ebb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.288753 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-64b8m"] Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.691537 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.700561 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerStarted","Data":"0f0525a788d09ab7dbc5b7179e97321ba422fbd170f5197c2447f43debd2c5c7"} Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.700724 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-central-agent" containerID="cri-o://3d2064737241a9f7bf6098cc357b389e019add37b83420b1cfe158e700514b8a" gracePeriod=30 Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.700993 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.701258 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="proxy-httpd" containerID="cri-o://0f0525a788d09ab7dbc5b7179e97321ba422fbd170f5197c2447f43debd2c5c7" gracePeriod=30 Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.701298 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="sg-core" containerID="cri-o://ce107b52c8f3445b9ec76a561859b862dc62dc9976d0d2ac59adfc9855cf6bd0" gracePeriod=30 Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.701330 
4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-notification-agent" containerID="cri-o://06ac8abe3739afaf69ebc58f3baaf3e27bfecb005ce000a65602459c58cfcb6e" gracePeriod=30 Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.705233 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-64b8m" event={"ID":"c88334ec-64f6-41ba-aee5-d5323e8c0c25","Type":"ContainerStarted","Data":"956c5d87f252b7fd42789858690e26fc0ca0b5a7be8c4cc152d63b1bddc300e7"} Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.708505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"30bd9d4f-e84f-4320-9057-80d3d53f7ebb","Type":"ContainerDied","Data":"00c41dbe58ad3dc460e41a4f8f86809ef9204f330e62756cf3eed317cf475042"} Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.708544 4932 scope.go:117] "RemoveContainer" containerID="ef6f599a3d418b9b543c98edcdf9f0f0c968498f5968cc9b3a5e3260ba0ced73" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.708664 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:55:02 crc kubenswrapper[4932]: W0218 19:55:02.712966 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7feb603_1c6f_423f_979e_840070052a6f.slice/crio-08aaa0a6f7ecaed6f00a69690e202523e24641b077b47ae363c813b7af5b54f3 WatchSource:0}: Error finding container 08aaa0a6f7ecaed6f00a69690e202523e24641b077b47ae363c813b7af5b54f3: Status 404 returned error can't find the container with id 08aaa0a6f7ecaed6f00a69690e202523e24641b077b47ae363c813b7af5b54f3 Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.716932 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"51bb24d5-d8d7-4bbb-a236-4967f9f7ece5","Type":"ContainerStarted","Data":"eced2922a93e5d472a9c76467bf01b45bd012e755d967ed8252968ef17137a74"} Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.729572 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.540934203 podStartE2EDuration="12.729557008s" podCreationTimestamp="2026-02-18 19:54:50 +0000 UTC" firstStartedPulling="2026-02-18 19:54:51.63064189 +0000 UTC m=+1255.212596735" lastFinishedPulling="2026-02-18 19:55:01.819264695 +0000 UTC m=+1265.401219540" observedRunningTime="2026-02-18 19:55:02.727958498 +0000 UTC m=+1266.309913353" watchObservedRunningTime="2026-02-18 19:55:02.729557008 +0000 UTC m=+1266.311511853" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.756534 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.098106433 podStartE2EDuration="16.756516246s" podCreationTimestamp="2026-02-18 19:54:46 +0000 UTC" firstStartedPulling="2026-02-18 19:54:47.160792371 +0000 UTC m=+1250.742747226" lastFinishedPulling="2026-02-18 19:55:01.819202194 +0000 UTC m=+1265.401157039" observedRunningTime="2026-02-18 
19:55:02.749845341 +0000 UTC m=+1266.331800196" watchObservedRunningTime="2026-02-18 19:55:02.756516246 +0000 UTC m=+1266.338471091" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.895433 4932 scope.go:117] "RemoveContainer" containerID="b586d7053bf31a7678ef91de08a0a0dd40541c6b68d24d82d104e9ca9533195b" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.913558 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.921386 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.937316 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:55:02 crc kubenswrapper[4932]: E0218 19:55:02.937696 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.937712 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api" Feb 18 19:55:02 crc kubenswrapper[4932]: E0218 19:55:02.937737 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api-log" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.937744 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api-log" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.937909 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api-log" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.937936 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" containerName="cinder-api" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.938969 
4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.940915 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.941071 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.941185 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 19:55:02 crc kubenswrapper[4932]: I0218 19:55:02.956904 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048482 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47232719-278b-4937-b20a-df608aa754ff-etc-machine-id\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048567 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048626 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47232719-278b-4937-b20a-df608aa754ff-logs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048752 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szp8d\" (UniqueName: \"kubernetes.io/projected/47232719-278b-4937-b20a-df608aa754ff-kube-api-access-szp8d\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048819 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-scripts\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048894 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.048946 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-public-tls-certs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.049131 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-config-data\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.049282 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-config-data-custom\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151685 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47232719-278b-4937-b20a-df608aa754ff-etc-machine-id\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151755 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151798 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47232719-278b-4937-b20a-df608aa754ff-logs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151839 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szp8d\" (UniqueName: \"kubernetes.io/projected/47232719-278b-4937-b20a-df608aa754ff-kube-api-access-szp8d\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151895 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-scripts\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 
19:55:03.151921 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151949 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-public-tls-certs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.151996 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-config-data\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.152037 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-config-data-custom\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.153114 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47232719-278b-4937-b20a-df608aa754ff-etc-machine-id\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.155954 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47232719-278b-4937-b20a-df608aa754ff-logs\") pod \"cinder-api-0\" (UID: 
\"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.158316 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-config-data-custom\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.158918 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-scripts\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.159236 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.160231 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.161620 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-config-data\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.164436 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/47232719-278b-4937-b20a-df608aa754ff-public-tls-certs\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.169499 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szp8d\" (UniqueName: \"kubernetes.io/projected/47232719-278b-4937-b20a-df608aa754ff-kube-api-access-szp8d\") pod \"cinder-api-0\" (UID: \"47232719-278b-4937-b20a-df608aa754ff\") " pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.189510 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30bd9d4f-e84f-4320-9057-80d3d53f7ebb" path="/var/lib/kubelet/pods/30bd9d4f-e84f-4320-9057-80d3d53f7ebb/volumes" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.190382 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd90883-79db-4903-87ab-828b9608f9fa" path="/var/lib/kubelet/pods/5bd90883-79db-4903-87ab-828b9608f9fa/volumes" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.280468 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.733018 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.739454 4932 generic.go:334] "Generic (PLEG): container finished" podID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerID="8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f" exitCode=0 Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.739637 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5966846f96-hbrsw" event={"ID":"fb1c0405-2770-4a03-ba51-c78005d57ad9","Type":"ContainerDied","Data":"8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f"} Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.739840 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5966846f96-hbrsw" event={"ID":"fb1c0405-2770-4a03-ba51-c78005d57ad9","Type":"ContainerDied","Data":"b0f1a3c159b5f59fb68d5b1503c08f8c96ed4a7d57c2077fe1e9116b9b2fbf3b"} Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.739932 4932 scope.go:117] "RemoveContainer" containerID="44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.740103 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5966846f96-hbrsw" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744149 4932 generic.go:334] "Generic (PLEG): container finished" podID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerID="0f0525a788d09ab7dbc5b7179e97321ba422fbd170f5197c2447f43debd2c5c7" exitCode=0 Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744493 4932 generic.go:334] "Generic (PLEG): container finished" podID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerID="ce107b52c8f3445b9ec76a561859b862dc62dc9976d0d2ac59adfc9855cf6bd0" exitCode=2 Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744579 4932 generic.go:334] "Generic (PLEG): container finished" podID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerID="06ac8abe3739afaf69ebc58f3baaf3e27bfecb005ce000a65602459c58cfcb6e" exitCode=0 Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744639 4932 generic.go:334] "Generic (PLEG): container finished" podID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerID="3d2064737241a9f7bf6098cc357b389e019add37b83420b1cfe158e700514b8a" exitCode=0 Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744718 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerDied","Data":"0f0525a788d09ab7dbc5b7179e97321ba422fbd170f5197c2447f43debd2c5c7"} Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744785 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerDied","Data":"ce107b52c8f3445b9ec76a561859b862dc62dc9976d0d2ac59adfc9855cf6bd0"} Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.744848 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerDied","Data":"06ac8abe3739afaf69ebc58f3baaf3e27bfecb005ce000a65602459c58cfcb6e"} Feb 18 19:55:03 crc 
kubenswrapper[4932]: I0218 19:55:03.744910 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerDied","Data":"3d2064737241a9f7bf6098cc357b389e019add37b83420b1cfe158e700514b8a"} Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.757990 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"c7feb603-1c6f-423f-979e-840070052a6f","Type":"ContainerStarted","Data":"21a693b21bd92f9ee466a319d872d778fe453f52a58b874af7c6007ff9102392"} Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.758030 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"c7feb603-1c6f-423f-979e-840070052a6f","Type":"ContainerStarted","Data":"08aaa0a6f7ecaed6f00a69690e202523e24641b077b47ae363c813b7af5b54f3"} Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.779787 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.7797712089999997 podStartE2EDuration="2.779771209s" podCreationTimestamp="2026-02-18 19:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:03.77537176 +0000 UTC m=+1267.357326605" watchObservedRunningTime="2026-02-18 19:55:03.779771209 +0000 UTC m=+1267.361726054" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.785110 4932 scope.go:117] "RemoveContainer" containerID="8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.822759 4932 scope.go:117] "RemoveContainer" containerID="44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09" Feb 18 19:55:03 crc kubenswrapper[4932]: E0218 19:55:03.824306 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09\": container with ID starting with 44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09 not found: ID does not exist" containerID="44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.824419 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09"} err="failed to get container status \"44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09\": rpc error: code = NotFound desc = could not find container \"44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09\": container with ID starting with 44de70ea8243ac6a26a85999f9986d4709163f4632edfdc74e832972fac2ff09 not found: ID does not exist" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.824516 4932 scope.go:117] "RemoveContainer" containerID="8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f" Feb 18 19:55:03 crc kubenswrapper[4932]: E0218 19:55:03.824945 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f\": container with ID starting with 8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f not found: ID does not exist" containerID="8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.824993 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f"} err="failed to get container status \"8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f\": rpc error: code = NotFound desc = could not find container \"8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f\": container with ID 
starting with 8814276ab26396b2ae50e791faa4b0371fd71f6cc12e32306c2bd981b3b56f5f not found: ID does not exist" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.831499 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.866107 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d54tn\" (UniqueName: \"kubernetes.io/projected/fb1c0405-2770-4a03-ba51-c78005d57ad9-kube-api-access-d54tn\") pod \"fb1c0405-2770-4a03-ba51-c78005d57ad9\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.866590 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-ovndb-tls-certs\") pod \"fb1c0405-2770-4a03-ba51-c78005d57ad9\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.866729 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-httpd-config\") pod \"fb1c0405-2770-4a03-ba51-c78005d57ad9\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.866803 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-config\") pod \"fb1c0405-2770-4a03-ba51-c78005d57ad9\" (UID: \"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.866952 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-combined-ca-bundle\") pod \"fb1c0405-2770-4a03-ba51-c78005d57ad9\" (UID: 
\"fb1c0405-2770-4a03-ba51-c78005d57ad9\") " Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.874039 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1c0405-2770-4a03-ba51-c78005d57ad9-kube-api-access-d54tn" (OuterVolumeSpecName: "kube-api-access-d54tn") pod "fb1c0405-2770-4a03-ba51-c78005d57ad9" (UID: "fb1c0405-2770-4a03-ba51-c78005d57ad9"). InnerVolumeSpecName "kube-api-access-d54tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.874857 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fb1c0405-2770-4a03-ba51-c78005d57ad9" (UID: "fb1c0405-2770-4a03-ba51-c78005d57ad9"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.940239 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb1c0405-2770-4a03-ba51-c78005d57ad9" (UID: "fb1c0405-2770-4a03-ba51-c78005d57ad9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.959100 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fb1c0405-2770-4a03-ba51-c78005d57ad9" (UID: "fb1c0405-2770-4a03-ba51-c78005d57ad9"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.969297 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.969318 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d54tn\" (UniqueName: \"kubernetes.io/projected/fb1c0405-2770-4a03-ba51-c78005d57ad9-kube-api-access-d54tn\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.969331 4932 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.969340 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:03 crc kubenswrapper[4932]: I0218 19:55:03.983782 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-config" (OuterVolumeSpecName: "config") pod "fb1c0405-2770-4a03-ba51-c78005d57ad9" (UID: "fb1c0405-2770-4a03-ba51-c78005d57ad9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.079275 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5966846f96-hbrsw"] Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.084667 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb1c0405-2770-4a03-ba51-c78005d57ad9-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.089390 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5966846f96-hbrsw"] Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.168559 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.287678 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-log-httpd\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.287735 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tm9j\" (UniqueName: \"kubernetes.io/projected/42f96153-201b-4efb-952d-ec27dcbd8c0c-kube-api-access-8tm9j\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.287809 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-combined-ca-bundle\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.287840 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-run-httpd\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.287909 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-sg-core-conf-yaml\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.287955 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-config-data\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.288016 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-scripts\") pod \"42f96153-201b-4efb-952d-ec27dcbd8c0c\" (UID: \"42f96153-201b-4efb-952d-ec27dcbd8c0c\") " Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.288972 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.289399 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.294121 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f96153-201b-4efb-952d-ec27dcbd8c0c-kube-api-access-8tm9j" (OuterVolumeSpecName: "kube-api-access-8tm9j") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "kube-api-access-8tm9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.297290 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-scripts" (OuterVolumeSpecName: "scripts") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.315533 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.372822 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.395123 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.395158 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.395181 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.395191 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tm9j\" (UniqueName: \"kubernetes.io/projected/42f96153-201b-4efb-952d-ec27dcbd8c0c-kube-api-access-8tm9j\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.395202 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.395210 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/42f96153-201b-4efb-952d-ec27dcbd8c0c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.411085 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-config-data" (OuterVolumeSpecName: "config-data") pod "42f96153-201b-4efb-952d-ec27dcbd8c0c" (UID: "42f96153-201b-4efb-952d-ec27dcbd8c0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.496339 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f96153-201b-4efb-952d-ec27dcbd8c0c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.791563 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"47232719-278b-4937-b20a-df608aa754ff","Type":"ContainerStarted","Data":"55666644405d724e92eb66ca6ff0a5a0536ce22f739acb31579e42ce03c8c6dd"} Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.791924 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"47232719-278b-4937-b20a-df608aa754ff","Type":"ContainerStarted","Data":"11091381623a887fe17ca074aad9c76b9bf435944dbf795a455c02b8aed96137"} Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.820167 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.821207 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42f96153-201b-4efb-952d-ec27dcbd8c0c","Type":"ContainerDied","Data":"bb80acf58868f86869a2edd8ebddc1372e30bf85bb8346fbc78e3b03f8adb9d4"} Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.821265 4932 scope.go:117] "RemoveContainer" containerID="0f0525a788d09ab7dbc5b7179e97321ba422fbd170f5197c2447f43debd2c5c7" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.904499 4932 scope.go:117] "RemoveContainer" containerID="ce107b52c8f3445b9ec76a561859b862dc62dc9976d0d2ac59adfc9855cf6bd0" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.905757 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.921109 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.934886 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:04 crc kubenswrapper[4932]: E0218 19:55:04.935398 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-api" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935420 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-api" Feb 18 19:55:04 crc kubenswrapper[4932]: E0218 19:55:04.935434 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="sg-core" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935445 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="sg-core" Feb 18 19:55:04 crc kubenswrapper[4932]: E0218 19:55:04.935471 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-notification-agent" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935477 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-notification-agent" Feb 18 19:55:04 crc kubenswrapper[4932]: E0218 19:55:04.935487 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="proxy-httpd" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935493 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="proxy-httpd" Feb 18 19:55:04 crc kubenswrapper[4932]: E0218 19:55:04.935503 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-httpd" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935509 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-httpd" Feb 18 19:55:04 crc kubenswrapper[4932]: E0218 19:55:04.935530 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-central-agent" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935536 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-central-agent" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935733 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-central-agent" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935745 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="proxy-httpd" Feb 18 19:55:04 crc 
kubenswrapper[4932]: I0218 19:55:04.935755 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-api" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935856 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" containerName="neutron-httpd" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935872 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="sg-core" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.935882 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" containerName="ceilometer-notification-agent" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.937893 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.940582 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.940867 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.957277 4932 scope.go:117] "RemoveContainer" containerID="06ac8abe3739afaf69ebc58f3baaf3e27bfecb005ce000a65602459c58cfcb6e" Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.960418 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:04 crc kubenswrapper[4932]: I0218 19:55:04.986066 4932 scope.go:117] "RemoveContainer" containerID="3d2064737241a9f7bf6098cc357b389e019add37b83420b1cfe158e700514b8a" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.134808 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7z7x\" (UniqueName: 
\"kubernetes.io/projected/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-kube-api-access-k7z7x\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.134902 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.134957 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-run-httpd\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.134975 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-scripts\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.135019 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.135049 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-config-data\") pod \"ceilometer-0\" (UID: 
\"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.135064 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-log-httpd\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.201446 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f96153-201b-4efb-952d-ec27dcbd8c0c" path="/var/lib/kubelet/pods/42f96153-201b-4efb-952d-ec27dcbd8c0c/volumes" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.202131 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb1c0405-2770-4a03-ba51-c78005d57ad9" path="/var/lib/kubelet/pods/fb1c0405-2770-4a03-ba51-c78005d57ad9/volumes" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7z7x\" (UniqueName: \"kubernetes.io/projected/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-kube-api-access-k7z7x\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236662 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236703 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-run-httpd\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " 
pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236722 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-scripts\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236751 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236777 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-config-data\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.236791 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-log-httpd\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.237232 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-log-httpd\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.237584 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-run-httpd\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.245244 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-scripts\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.252028 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.252738 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.263217 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-config-data\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.263783 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7z7x\" (UniqueName: \"kubernetes.io/projected/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-kube-api-access-k7z7x\") pod \"ceilometer-0\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.268606 4932 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.776148 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.832858 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"47232719-278b-4937-b20a-df608aa754ff","Type":"ContainerStarted","Data":"3e7e41a4426e9ada8ce8a7dd8e1993272ee9ec065c595c55451ded476c951f03"} Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.833038 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.838430 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerStarted","Data":"f4124030be51a32309b149f7b80243f14d8defbe91c31e12165acaf4898b489f"} Feb 18 19:55:05 crc kubenswrapper[4932]: I0218 19:55:05.853949 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.853928692 podStartE2EDuration="3.853928692s" podCreationTimestamp="2026-02-18 19:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:05.848621751 +0000 UTC m=+1269.430576596" watchObservedRunningTime="2026-02-18 19:55:05.853928692 +0000 UTC m=+1269.435883537" Feb 18 19:55:06 crc kubenswrapper[4932]: I0218 19:55:06.074304 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:06 crc kubenswrapper[4932]: I0218 19:55:06.858515 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerStarted","Data":"2a8231a4749b7c28294044affca5243982acbc40847aed0b7aabf1f4dcec52be"} Feb 
18 19:55:06 crc kubenswrapper[4932]: I0218 19:55:06.858900 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerStarted","Data":"169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633"} Feb 18 19:55:07 crc kubenswrapper[4932]: I0218 19:55:07.227261 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 18 19:55:07 crc kubenswrapper[4932]: I0218 19:55:07.740311 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:07 crc kubenswrapper[4932]: I0218 19:55:07.740530 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:07 crc kubenswrapper[4932]: I0218 19:55:07.741508 4932 scope.go:117] "RemoveContainer" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" Feb 18 19:55:07 crc kubenswrapper[4932]: I0218 19:55:07.875328 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerStarted","Data":"5543bc0c607aa54b7d22bbd948e592ac89d47c26384cf6a11fc3e926ffc9bccb"} Feb 18 19:55:08 crc kubenswrapper[4932]: I0218 19:55:08.889635 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerStarted","Data":"fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef"} Feb 18 19:55:12 crc kubenswrapper[4932]: I0218 19:55:12.227268 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 18 19:55:12 crc kubenswrapper[4932]: I0218 19:55:12.257930 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 18 19:55:12 crc kubenswrapper[4932]: I0218 
19:55:12.960720 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.463834 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.963458 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-64b8m" event={"ID":"c88334ec-64f6-41ba-aee5-d5323e8c0c25","Type":"ContainerStarted","Data":"4e7866a2ddd0a42f76d440fa6b1c16f63d3f4f13968f3f538f0dc810522b826b"} Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.965726 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerStarted","Data":"4eac274794eddf289d34407897e2996096b124daf0d65b5d1aef19150d4661b0"} Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.965872 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-central-agent" containerID="cri-o://169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633" gracePeriod=30 Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.965916 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.965957 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="proxy-httpd" containerID="cri-o://4eac274794eddf289d34407897e2996096b124daf0d65b5d1aef19150d4661b0" gracePeriod=30 Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.965974 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" 
containerName="ceilometer-notification-agent" containerID="cri-o://2a8231a4749b7c28294044affca5243982acbc40847aed0b7aabf1f4dcec52be" gracePeriod=30 Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.965991 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="sg-core" containerID="cri-o://5543bc0c607aa54b7d22bbd948e592ac89d47c26384cf6a11fc3e926ffc9bccb" gracePeriod=30 Feb 18 19:55:16 crc kubenswrapper[4932]: I0218 19:55:16.992065 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-64b8m" podStartSLOduration=5.268223185 podStartE2EDuration="18.992045882s" podCreationTimestamp="2026-02-18 19:54:58 +0000 UTC" firstStartedPulling="2026-02-18 19:55:02.32112472 +0000 UTC m=+1265.903079565" lastFinishedPulling="2026-02-18 19:55:16.044947407 +0000 UTC m=+1279.626902262" observedRunningTime="2026-02-18 19:55:16.982811753 +0000 UTC m=+1280.564766598" watchObservedRunningTime="2026-02-18 19:55:16.992045882 +0000 UTC m=+1280.574000737" Feb 18 19:55:17 crc kubenswrapper[4932]: I0218 19:55:17.021770 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.777816193 podStartE2EDuration="13.021744829s" podCreationTimestamp="2026-02-18 19:55:04 +0000 UTC" firstStartedPulling="2026-02-18 19:55:05.797445202 +0000 UTC m=+1269.379400047" lastFinishedPulling="2026-02-18 19:55:16.041373838 +0000 UTC m=+1279.623328683" observedRunningTime="2026-02-18 19:55:17.015909894 +0000 UTC m=+1280.597864749" watchObservedRunningTime="2026-02-18 19:55:17.021744829 +0000 UTC m=+1280.603699674" Feb 18 19:55:17 crc kubenswrapper[4932]: E0218 19:55:17.486505 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d92a28e_33fd_49cd_ba7e_1b12f1b4628b.slice/crio-conmon-169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d92a28e_33fd_49cd_ba7e_1b12f1b4628b.slice/crio-169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:55:17 crc kubenswrapper[4932]: I0218 19:55:17.740703 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:17 crc kubenswrapper[4932]: I0218 19:55:17.773702 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.003496 4932 generic.go:334] "Generic (PLEG): container finished" podID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerID="4eac274794eddf289d34407897e2996096b124daf0d65b5d1aef19150d4661b0" exitCode=0 Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.003540 4932 generic.go:334] "Generic (PLEG): container finished" podID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerID="5543bc0c607aa54b7d22bbd948e592ac89d47c26384cf6a11fc3e926ffc9bccb" exitCode=2 Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.003547 4932 generic.go:334] "Generic (PLEG): container finished" podID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerID="169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633" exitCode=0 Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.003982 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerDied","Data":"4eac274794eddf289d34407897e2996096b124daf0d65b5d1aef19150d4661b0"} Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.004089 4932 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerDied","Data":"5543bc0c607aa54b7d22bbd948e592ac89d47c26384cf6a11fc3e926ffc9bccb"} Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.004156 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerDied","Data":"169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633"} Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.005247 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.059402 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:18 crc kubenswrapper[4932]: I0218 19:55:18.128249 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.025402 4932 generic.go:334] "Generic (PLEG): container finished" podID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerID="2a8231a4749b7c28294044affca5243982acbc40847aed0b7aabf1f4dcec52be" exitCode=0 Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.025466 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerDied","Data":"2a8231a4749b7c28294044affca5243982acbc40847aed0b7aabf1f4dcec52be"} Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.025783 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" containerID="cri-o://fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef" gracePeriod=30 Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 
19:55:20.130633 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.218877 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-log-httpd\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.218934 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7z7x\" (UniqueName: \"kubernetes.io/projected/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-kube-api-access-k7z7x\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.218999 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-sg-core-conf-yaml\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.219122 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-run-httpd\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.219156 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-combined-ca-bundle\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.219295 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-scripts\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.219317 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-config-data\") pod \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\" (UID: \"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b\") " Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.219660 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.220283 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.220849 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.230421 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-scripts" (OuterVolumeSpecName: "scripts") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.230542 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-kube-api-access-k7z7x" (OuterVolumeSpecName: "kube-api-access-k7z7x") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "kube-api-access-k7z7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.263642 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.301626 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.322103 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.322129 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.322140 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7z7x\" (UniqueName: \"kubernetes.io/projected/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-kube-api-access-k7z7x\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.322150 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.322158 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.331267 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-config-data" (OuterVolumeSpecName: "config-data") pod "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" (UID: "6d92a28e-33fd-49cd-ba7e-1b12f1b4628b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:20 crc kubenswrapper[4932]: I0218 19:55:20.423935 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.040969 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d92a28e-33fd-49cd-ba7e-1b12f1b4628b","Type":"ContainerDied","Data":"f4124030be51a32309b149f7b80243f14d8defbe91c31e12165acaf4898b489f"} Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.041247 4932 scope.go:117] "RemoveContainer" containerID="4eac274794eddf289d34407897e2996096b124daf0d65b5d1aef19150d4661b0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.041052 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.065391 4932 scope.go:117] "RemoveContainer" containerID="5543bc0c607aa54b7d22bbd948e592ac89d47c26384cf6a11fc3e926ffc9bccb" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.086044 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.109584 4932 scope.go:117] "RemoveContainer" containerID="2a8231a4749b7c28294044affca5243982acbc40847aed0b7aabf1f4dcec52be" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.111348 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.135922 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:21 crc kubenswrapper[4932]: E0218 19:55:21.136469 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-notification-agent" Feb 18 19:55:21 crc 
kubenswrapper[4932]: I0218 19:55:21.136494 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-notification-agent" Feb 18 19:55:21 crc kubenswrapper[4932]: E0218 19:55:21.136514 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="proxy-httpd" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.136523 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="proxy-httpd" Feb 18 19:55:21 crc kubenswrapper[4932]: E0218 19:55:21.136545 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-central-agent" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.136554 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-central-agent" Feb 18 19:55:21 crc kubenswrapper[4932]: E0218 19:55:21.136583 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="sg-core" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.136591 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="sg-core" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.136824 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="proxy-httpd" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.136848 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-central-agent" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.136867 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="sg-core" Feb 18 19:55:21 crc 
kubenswrapper[4932]: I0218 19:55:21.136887 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" containerName="ceilometer-notification-agent" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.138977 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.148129 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.152912 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.153122 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.159455 4932 scope.go:117] "RemoveContainer" containerID="169807715b64b948a02827aa86071857ef5eef4e02b77baa6a3f7849b7ff2633" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.195276 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d92a28e-33fd-49cd-ba7e-1b12f1b4628b" path="/var/lib/kubelet/pods/6d92a28e-33fd-49cd-ba7e-1b12f1b4628b/volumes" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.249729 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-config-data\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.249808 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-log-httpd\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " 
pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.249834 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.250661 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-scripts\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.250694 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-run-httpd\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.250775 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.250796 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlbvv\" (UniqueName: \"kubernetes.io/projected/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-kube-api-access-dlbvv\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352010 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352066 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlbvv\" (UniqueName: \"kubernetes.io/projected/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-kube-api-access-dlbvv\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352097 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-config-data\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352129 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-log-httpd\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352150 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352253 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-scripts\") pod \"ceilometer-0\" (UID: 
\"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352285 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-run-httpd\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.352867 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-run-httpd\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.353295 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-log-httpd\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.359374 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.359554 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.384356 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-scripts\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.391582 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-config-data\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.397288 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlbvv\" (UniqueName: \"kubernetes.io/projected/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-kube-api-access-dlbvv\") pod \"ceilometer-0\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.462737 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.927040 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:21 crc kubenswrapper[4932]: W0218 19:55:21.929192 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b53bc70_03d1_4b04_8b5e_bf135aed16bc.slice/crio-645c7a2022539be3402561398dbaa367b877bd373e87043a2a883b26e04638ba WatchSource:0}: Error finding container 645c7a2022539be3402561398dbaa367b877bd373e87043a2a883b26e04638ba: Status 404 returned error can't find the container with id 645c7a2022539be3402561398dbaa367b877bd373e87043a2a883b26e04638ba Feb 18 19:55:21 crc kubenswrapper[4932]: I0218 19:55:21.932284 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.058766 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerStarted","Data":"645c7a2022539be3402561398dbaa367b877bd373e87043a2a883b26e04638ba"} Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.740949 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.786794 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-custom-prometheus-ca\") pod \"0882c686-1b07-4ac7-a6be-148eff7faa19\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.786889 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-config-data\") pod \"0882c686-1b07-4ac7-a6be-148eff7faa19\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.786974 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0882c686-1b07-4ac7-a6be-148eff7faa19-logs\") pod \"0882c686-1b07-4ac7-a6be-148eff7faa19\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.786994 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tb6k\" (UniqueName: \"kubernetes.io/projected/0882c686-1b07-4ac7-a6be-148eff7faa19-kube-api-access-9tb6k\") pod \"0882c686-1b07-4ac7-a6be-148eff7faa19\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.787084 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-combined-ca-bundle\") pod \"0882c686-1b07-4ac7-a6be-148eff7faa19\" (UID: \"0882c686-1b07-4ac7-a6be-148eff7faa19\") " Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.788263 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0882c686-1b07-4ac7-a6be-148eff7faa19-logs" (OuterVolumeSpecName: "logs") pod "0882c686-1b07-4ac7-a6be-148eff7faa19" (UID: "0882c686-1b07-4ac7-a6be-148eff7faa19"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.797353 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0882c686-1b07-4ac7-a6be-148eff7faa19-kube-api-access-9tb6k" (OuterVolumeSpecName: "kube-api-access-9tb6k") pod "0882c686-1b07-4ac7-a6be-148eff7faa19" (UID: "0882c686-1b07-4ac7-a6be-148eff7faa19"). InnerVolumeSpecName "kube-api-access-9tb6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.823722 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "0882c686-1b07-4ac7-a6be-148eff7faa19" (UID: "0882c686-1b07-4ac7-a6be-148eff7faa19"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.841357 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0882c686-1b07-4ac7-a6be-148eff7faa19" (UID: "0882c686-1b07-4ac7-a6be-148eff7faa19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.889309 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-config-data" (OuterVolumeSpecName: "config-data") pod "0882c686-1b07-4ac7-a6be-148eff7faa19" (UID: "0882c686-1b07-4ac7-a6be-148eff7faa19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.890209 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.890239 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0882c686-1b07-4ac7-a6be-148eff7faa19-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.890271 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tb6k\" (UniqueName: \"kubernetes.io/projected/0882c686-1b07-4ac7-a6be-148eff7faa19-kube-api-access-9tb6k\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.890283 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:22 crc kubenswrapper[4932]: I0218 19:55:22.890292 4932 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0882c686-1b07-4ac7-a6be-148eff7faa19-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.069520 4932 generic.go:334] "Generic (PLEG): container finished" podID="0882c686-1b07-4ac7-a6be-148eff7faa19" 
containerID="fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef" exitCode=0 Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.069626 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.069643 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerDied","Data":"fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef"} Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.070573 4932 scope.go:117] "RemoveContainer" containerID="fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.070774 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0882c686-1b07-4ac7-a6be-148eff7faa19","Type":"ContainerDied","Data":"04f5dff2832c6635da78aa840490b39a4906ea50c8d89ba21f85a3c5474f7c9b"} Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.075928 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerStarted","Data":"7acb15159d4595ba439a2fb0bc1f02945c077e1367284777296d41f6db4c2909"} Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.075974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerStarted","Data":"352d9d060068cebd5ee94ed059873c579e4c314ed02a4f51b120c4e46c462b6a"} Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.108307 4932 scope.go:117] "RemoveContainer" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.124471 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/watcher-decision-engine-0"] Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.136059 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145050 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:55:23 crc kubenswrapper[4932]: E0218 19:55:23.145525 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145541 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: E0218 19:55:23.145552 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145558 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: E0218 19:55:23.145589 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145596 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145757 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145775 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" 
containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.145785 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.152942 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.153054 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.156620 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.170513 4932 scope.go:117] "RemoveContainer" containerID="fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef" Feb 18 19:55:23 crc kubenswrapper[4932]: E0218 19:55:23.175302 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef\": container with ID starting with fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef not found: ID does not exist" containerID="fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.175344 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef"} err="failed to get container status \"fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef\": rpc error: code = NotFound desc = could not find container \"fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef\": container with ID starting with fe208f8627cc247d1776f2857d9d80c1aad527a9990296862d004284122e81ef not found: ID does 
not exist" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.175372 4932 scope.go:117] "RemoveContainer" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" Feb 18 19:55:23 crc kubenswrapper[4932]: E0218 19:55:23.177558 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0\": container with ID starting with ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0 not found: ID does not exist" containerID="ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.177619 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0"} err="failed to get container status \"ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0\": rpc error: code = NotFound desc = could not find container \"ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0\": container with ID starting with ed7bb6ecb9363ec214feff66ba52e868e0754894a65c87ef835d3f25e4d547c0 not found: ID does not exist" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.196548 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2de1e0e-9137-47d7-ab62-ae47f646f26e-logs\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.196642 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68b59\" (UniqueName: \"kubernetes.io/projected/b2de1e0e-9137-47d7-ab62-ae47f646f26e-kube-api-access-68b59\") pod \"watcher-decision-engine-0\" (UID: 
\"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.196699 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.196725 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.196743 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.198082 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" path="/var/lib/kubelet/pods/0882c686-1b07-4ac7-a6be-148eff7faa19/volumes" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.298992 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 
19:55:23.299051 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.299301 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2de1e0e-9137-47d7-ab62-ae47f646f26e-logs\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.299387 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68b59\" (UniqueName: \"kubernetes.io/projected/b2de1e0e-9137-47d7-ab62-ae47f646f26e-kube-api-access-68b59\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.299535 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.301794 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2de1e0e-9137-47d7-ab62-ae47f646f26e-logs\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.304911 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.305007 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.308785 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b2de1e0e-9137-47d7-ab62-ae47f646f26e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.321910 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68b59\" (UniqueName: \"kubernetes.io/projected/b2de1e0e-9137-47d7-ab62-ae47f646f26e-kube-api-access-68b59\") pod \"watcher-decision-engine-0\" (UID: \"b2de1e0e-9137-47d7-ab62-ae47f646f26e\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.518260 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:23 crc kubenswrapper[4932]: W0218 19:55:23.987477 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2de1e0e_9137_47d7_ab62_ae47f646f26e.slice/crio-b3e0f2c928970f850de581a11705dd0da0f06aeadc7544afa2836c9d273eb595 WatchSource:0}: Error finding container b3e0f2c928970f850de581a11705dd0da0f06aeadc7544afa2836c9d273eb595: Status 404 returned error can't find the container with id b3e0f2c928970f850de581a11705dd0da0f06aeadc7544afa2836c9d273eb595 Feb 18 19:55:23 crc kubenswrapper[4932]: I0218 19:55:23.988671 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:55:24 crc kubenswrapper[4932]: I0218 19:55:24.089689 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerStarted","Data":"ad61282ffc3a72810e8f042075ed0da414d8b82de84a0ac7c8a5f7db1e3ef9a9"} Feb 18 19:55:24 crc kubenswrapper[4932]: I0218 19:55:24.091007 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b2de1e0e-9137-47d7-ab62-ae47f646f26e","Type":"ContainerStarted","Data":"b3e0f2c928970f850de581a11705dd0da0f06aeadc7544afa2836c9d273eb595"} Feb 18 19:55:25 crc kubenswrapper[4932]: I0218 19:55:25.104607 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b2de1e0e-9137-47d7-ab62-ae47f646f26e","Type":"ContainerStarted","Data":"c3fe695810a82bae333705df40e3a822375c89eaf4ee576f93fc93a553eaaf04"} Feb 18 19:55:25 crc kubenswrapper[4932]: I0218 19:55:25.123722 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.123702261 podStartE2EDuration="2.123702261s" podCreationTimestamp="2026-02-18 19:55:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:25.120581064 +0000 UTC m=+1288.702535909" watchObservedRunningTime="2026-02-18 19:55:25.123702261 +0000 UTC m=+1288.705657116" Feb 18 19:55:26 crc kubenswrapper[4932]: I0218 19:55:26.116565 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerStarted","Data":"004980b5d8abc3f9855902198b1efeb131117fc949d4f8c4b5c8b8b2d74e77fe"} Feb 18 19:55:26 crc kubenswrapper[4932]: I0218 19:55:26.142459 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.557485598 podStartE2EDuration="5.142437133s" podCreationTimestamp="2026-02-18 19:55:21 +0000 UTC" firstStartedPulling="2026-02-18 19:55:21.932083569 +0000 UTC m=+1285.514038414" lastFinishedPulling="2026-02-18 19:55:25.517035094 +0000 UTC m=+1289.098989949" observedRunningTime="2026-02-18 19:55:26.135305286 +0000 UTC m=+1289.717260131" watchObservedRunningTime="2026-02-18 19:55:26.142437133 +0000 UTC m=+1289.724391978" Feb 18 19:55:27 crc kubenswrapper[4932]: I0218 19:55:27.127811 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:55:27 crc kubenswrapper[4932]: I0218 19:55:27.605632 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:55:27 crc kubenswrapper[4932]: I0218 19:55:27.605995 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:55:29 crc kubenswrapper[4932]: I0218 19:55:29.877822 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:55:29 crc kubenswrapper[4932]: I0218 19:55:29.878358 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-log" containerID="cri-o://58449f068ea443fd840aa17c5a640ee0e5ae861f046a6ea06594d638db518b63" gracePeriod=30 Feb 18 19:55:29 crc kubenswrapper[4932]: I0218 19:55:29.878429 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-httpd" containerID="cri-o://fd324d05ae668c3f684220e361c41b6ff46379462c08ea7c413014fe4a371e37" gracePeriod=30 Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.160059 4932 generic.go:334] "Generic (PLEG): container finished" podID="67750e31-ed62-4908-9b56-3a46be936224" containerID="58449f068ea443fd840aa17c5a640ee0e5ae861f046a6ea06594d638db518b63" exitCode=143 Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.160116 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67750e31-ed62-4908-9b56-3a46be936224","Type":"ContainerDied","Data":"58449f068ea443fd840aa17c5a640ee0e5ae861f046a6ea06594d638db518b63"} Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.525399 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.525661 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-central-agent" 
containerID="cri-o://352d9d060068cebd5ee94ed059873c579e4c314ed02a4f51b120c4e46c462b6a" gracePeriod=30 Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.525721 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="sg-core" containerID="cri-o://ad61282ffc3a72810e8f042075ed0da414d8b82de84a0ac7c8a5f7db1e3ef9a9" gracePeriod=30 Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.525769 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="proxy-httpd" containerID="cri-o://004980b5d8abc3f9855902198b1efeb131117fc949d4f8c4b5c8b8b2d74e77fe" gracePeriod=30 Feb 18 19:55:30 crc kubenswrapper[4932]: I0218 19:55:30.525779 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-notification-agent" containerID="cri-o://7acb15159d4595ba439a2fb0bc1f02945c077e1367284777296d41f6db4c2909" gracePeriod=30 Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.160925 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.161396 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-log" containerID="cri-o://ec4505e85a78c60e725484af01a4d51a03ebf66c4a5ad9b030f60b812e85e4e3" gracePeriod=30 Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.161950 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-httpd" 
containerID="cri-o://99077386a5dc37e2145b33681651b019f28beed715374edd046c2366a76b2af6" gracePeriod=30 Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.187009 4932 generic.go:334] "Generic (PLEG): container finished" podID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerID="004980b5d8abc3f9855902198b1efeb131117fc949d4f8c4b5c8b8b2d74e77fe" exitCode=0 Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.187036 4932 generic.go:334] "Generic (PLEG): container finished" podID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerID="ad61282ffc3a72810e8f042075ed0da414d8b82de84a0ac7c8a5f7db1e3ef9a9" exitCode=2 Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.192522 4932 generic.go:334] "Generic (PLEG): container finished" podID="67750e31-ed62-4908-9b56-3a46be936224" containerID="fd324d05ae668c3f684220e361c41b6ff46379462c08ea7c413014fe4a371e37" exitCode=0 Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.198041 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerDied","Data":"004980b5d8abc3f9855902198b1efeb131117fc949d4f8c4b5c8b8b2d74e77fe"} Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.198078 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerDied","Data":"ad61282ffc3a72810e8f042075ed0da414d8b82de84a0ac7c8a5f7db1e3ef9a9"} Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.198089 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67750e31-ed62-4908-9b56-3a46be936224","Type":"ContainerDied","Data":"fd324d05ae668c3f684220e361c41b6ff46379462c08ea7c413014fe4a371e37"} Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.363768 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.450646 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-logs\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.450921 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-combined-ca-bundle\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.450948 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-scripts\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.450987 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.451302 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-logs" (OuterVolumeSpecName: "logs") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.451794 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bxxv\" (UniqueName: \"kubernetes.io/projected/67750e31-ed62-4908-9b56-3a46be936224-kube-api-access-2bxxv\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.451826 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-config-data\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.451885 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-public-tls-certs\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.451908 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-httpd-run\") pod \"67750e31-ed62-4908-9b56-3a46be936224\" (UID: \"67750e31-ed62-4908-9b56-3a46be936224\") " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.452399 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.452566 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.465404 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.468898 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-scripts" (OuterVolumeSpecName: "scripts") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.470380 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67750e31-ed62-4908-9b56-3a46be936224-kube-api-access-2bxxv" (OuterVolumeSpecName: "kube-api-access-2bxxv") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "kube-api-access-2bxxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.503353 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.530658 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.554670 4932 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.554712 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67750e31-ed62-4908-9b56-3a46be936224-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.554722 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.554736 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.554764 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.554775 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bxxv\" (UniqueName: 
\"kubernetes.io/projected/67750e31-ed62-4908-9b56-3a46be936224-kube-api-access-2bxxv\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.562740 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-config-data" (OuterVolumeSpecName: "config-data") pod "67750e31-ed62-4908-9b56-3a46be936224" (UID: "67750e31-ed62-4908-9b56-3a46be936224"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.582850 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.656120 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:31 crc kubenswrapper[4932]: I0218 19:55:31.656156 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67750e31-ed62-4908-9b56-3a46be936224-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.218911 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67750e31-ed62-4908-9b56-3a46be936224","Type":"ContainerDied","Data":"49ded8c61eff3d7eb04054517499be8ecf50df374bdd44a32ed528213544141a"} Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.219010 4932 scope.go:117] "RemoveContainer" containerID="fd324d05ae668c3f684220e361c41b6ff46379462c08ea7c413014fe4a371e37" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.218952 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.224071 4932 generic.go:334] "Generic (PLEG): container finished" podID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerID="99077386a5dc37e2145b33681651b019f28beed715374edd046c2366a76b2af6" exitCode=0 Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.224107 4932 generic.go:334] "Generic (PLEG): container finished" podID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerID="ec4505e85a78c60e725484af01a4d51a03ebf66c4a5ad9b030f60b812e85e4e3" exitCode=143 Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.224155 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bdfd208a-d781-4471-aa15-5fcbb592ec07","Type":"ContainerDied","Data":"99077386a5dc37e2145b33681651b019f28beed715374edd046c2366a76b2af6"} Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.224212 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bdfd208a-d781-4471-aa15-5fcbb592ec07","Type":"ContainerDied","Data":"ec4505e85a78c60e725484af01a4d51a03ebf66c4a5ad9b030f60b812e85e4e3"} Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.236330 4932 generic.go:334] "Generic (PLEG): container finished" podID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerID="7acb15159d4595ba439a2fb0bc1f02945c077e1367284777296d41f6db4c2909" exitCode=0 Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.236374 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerDied","Data":"7acb15159d4595ba439a2fb0bc1f02945c077e1367284777296d41f6db4c2909"} Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.289352 4932 scope.go:117] "RemoveContainer" containerID="58449f068ea443fd840aa17c5a640ee0e5ae861f046a6ea06594d638db518b63" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 
19:55:32.293436 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.312719 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.338205 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:55:32 crc kubenswrapper[4932]: E0218 19:55:32.339664 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-log" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.339686 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-log" Feb 18 19:55:32 crc kubenswrapper[4932]: E0218 19:55:32.339715 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-httpd" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.339722 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-httpd" Feb 18 19:55:32 crc kubenswrapper[4932]: E0218 19:55:32.339753 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.339760 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.340951 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-httpd" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.340996 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0882c686-1b07-4ac7-a6be-148eff7faa19" containerName="watcher-decision-engine" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.341036 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="67750e31-ed62-4908-9b56-3a46be936224" containerName="glance-log" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.348069 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.357764 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.362705 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.382485 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484131 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-logs\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484219 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljbkd\" (UniqueName: \"kubernetes.io/projected/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-kube-api-access-ljbkd\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484298 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484332 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484374 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484489 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.484592 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586286 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586400 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586465 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-logs\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586513 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljbkd\" (UniqueName: \"kubernetes.io/projected/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-kube-api-access-ljbkd\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586574 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586604 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586661 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.586696 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.587078 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.587705 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-logs\") pod 
\"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.588286 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.592411 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.593316 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.604743 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.606557 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.607633 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljbkd\" (UniqueName: \"kubernetes.io/projected/8ef7cbfe-936d-4b0d-92c7-f61b7b89a735-kube-api-access-ljbkd\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.642440 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735\") " pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.674372 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.797269 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890582 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-logs\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890643 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-config-data\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890663 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4vtw\" (UniqueName: \"kubernetes.io/projected/bdfd208a-d781-4471-aa15-5fcbb592ec07-kube-api-access-m4vtw\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890695 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-httpd-run\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890770 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890800 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-combined-ca-bundle\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890834 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-scripts\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.890982 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-internal-tls-certs\") pod \"bdfd208a-d781-4471-aa15-5fcbb592ec07\" (UID: \"bdfd208a-d781-4471-aa15-5fcbb592ec07\") " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.891826 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.892059 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-logs" (OuterVolumeSpecName: "logs") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.906638 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-scripts" (OuterVolumeSpecName: "scripts") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.910218 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.910403 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfd208a-d781-4471-aa15-5fcbb592ec07-kube-api-access-m4vtw" (OuterVolumeSpecName: "kube-api-access-m4vtw") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "kube-api-access-m4vtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.941268 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.954394 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-config-data" (OuterVolumeSpecName: "config-data") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.986035 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bdfd208a-d781-4471-aa15-5fcbb592ec07" (UID: "bdfd208a-d781-4471-aa15-5fcbb592ec07"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.992923 4932 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.992953 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.992963 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.992971 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4vtw\" (UniqueName: \"kubernetes.io/projected/bdfd208a-d781-4471-aa15-5fcbb592ec07-kube-api-access-m4vtw\") on node \"crc\" DevicePath \"\"" Feb 18 
19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.992982 4932 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bdfd208a-d781-4471-aa15-5fcbb592ec07-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.993011 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.993020 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:32 crc kubenswrapper[4932]: I0218 19:55:32.993028 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdfd208a-d781-4471-aa15-5fcbb592ec07-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.017538 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.097094 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.212633 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67750e31-ed62-4908-9b56-3a46be936224" path="/var/lib/kubelet/pods/67750e31-ed62-4908-9b56-3a46be936224/volumes" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.250637 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"bdfd208a-d781-4471-aa15-5fcbb592ec07","Type":"ContainerDied","Data":"1fd189f5734df90d29419c8abecc4af71db32a09c9c7fb47958213aa32db2369"} Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.250699 4932 scope.go:117] "RemoveContainer" containerID="99077386a5dc37e2145b33681651b019f28beed715374edd046c2366a76b2af6" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.250830 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.257435 4932 generic.go:334] "Generic (PLEG): container finished" podID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerID="352d9d060068cebd5ee94ed059873c579e4c314ed02a4f51b120c4e46c462b6a" exitCode=0 Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.257549 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerDied","Data":"352d9d060068cebd5ee94ed059873c579e4c314ed02a4f51b120c4e46c462b6a"} Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.274446 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.285290 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.292807 4932 scope.go:117] "RemoveContainer" containerID="ec4505e85a78c60e725484af01a4d51a03ebf66c4a5ad9b030f60b812e85e4e3" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.299134 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.313141 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:55:33 crc kubenswrapper[4932]: E0218 19:55:33.313605 4932 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-log" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.313626 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-log" Feb 18 19:55:33 crc kubenswrapper[4932]: E0218 19:55:33.313659 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-httpd" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.313666 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-httpd" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.316746 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-httpd" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.316778 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" containerName="glance-log" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.319394 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.323802 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.324850 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.326984 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403268 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403323 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403358 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2c5249-4fcd-404d-8eac-551e66fb93d0-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403378 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403416 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krrdf\" (UniqueName: \"kubernetes.io/projected/cb2c5249-4fcd-404d-8eac-551e66fb93d0-kube-api-access-krrdf\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403505 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403539 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.403655 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb2c5249-4fcd-404d-8eac-551e66fb93d0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505041 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505377 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505405 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2c5249-4fcd-404d-8eac-551e66fb93d0-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505423 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505474 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krrdf\" (UniqueName: \"kubernetes.io/projected/cb2c5249-4fcd-404d-8eac-551e66fb93d0-kube-api-access-krrdf\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505513 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505580 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb2c5249-4fcd-404d-8eac-551e66fb93d0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.505749 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.506078 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb2c5249-4fcd-404d-8eac-551e66fb93d0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.506444 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb2c5249-4fcd-404d-8eac-551e66fb93d0-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.513906 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.514400 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.516378 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.517191 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2c5249-4fcd-404d-8eac-551e66fb93d0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.518547 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.522585 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krrdf\" 
(UniqueName: \"kubernetes.io/projected/cb2c5249-4fcd-404d-8eac-551e66fb93d0-kube-api-access-krrdf\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.543004 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cb2c5249-4fcd-404d-8eac-551e66fb93d0\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.562726 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.570980 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.673438 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.708827 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-run-httpd\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.708886 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-sg-core-conf-yaml\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.708979 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-scripts\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709044 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-log-httpd\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709063 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlbvv\" (UniqueName: \"kubernetes.io/projected/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-kube-api-access-dlbvv\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709088 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-combined-ca-bundle\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709111 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-config-data\") pod \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\" (UID: \"2b53bc70-03d1-4b04-8b5e-bf135aed16bc\") " Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709126 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709371 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709526 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.709538 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.713622 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-kube-api-access-dlbvv" (OuterVolumeSpecName: "kube-api-access-dlbvv") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "kube-api-access-dlbvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.719388 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-scripts" (OuterVolumeSpecName: "scripts") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.750135 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.811537 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.811567 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlbvv\" (UniqueName: \"kubernetes.io/projected/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-kube-api-access-dlbvv\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.811582 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.838455 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-config-data" (OuterVolumeSpecName: "config-data") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.855338 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b53bc70-03d1-4b04-8b5e-bf135aed16bc" (UID: "2b53bc70-03d1-4b04-8b5e-bf135aed16bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.912900 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:33 crc kubenswrapper[4932]: I0218 19:55:33.912934 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53bc70-03d1-4b04-8b5e-bf135aed16bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.223111 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:55:34 crc kubenswrapper[4932]: W0218 19:55:34.229505 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2c5249_4fcd_404d_8eac_551e66fb93d0.slice/crio-786aae7086b1df804de18794f8547263047420f36b6c9f086226cf52d2a4440a WatchSource:0}: Error finding container 786aae7086b1df804de18794f8547263047420f36b6c9f086226cf52d2a4440a: Status 404 returned error can't find the container with id 786aae7086b1df804de18794f8547263047420f36b6c9f086226cf52d2a4440a Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.290476 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735","Type":"ContainerStarted","Data":"25fe70ebf103b6cadc41b1f91ef6f1dd1a5a7a4e24f1d3d3fe196fdcf098d722"} Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.290555 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735","Type":"ContainerStarted","Data":"142cd912c87187d95f5584853db611262026e0a8153b39013ff8a7a9378cbfed"} Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.292435 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb2c5249-4fcd-404d-8eac-551e66fb93d0","Type":"ContainerStarted","Data":"786aae7086b1df804de18794f8547263047420f36b6c9f086226cf52d2a4440a"} Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.296458 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b53bc70-03d1-4b04-8b5e-bf135aed16bc","Type":"ContainerDied","Data":"645c7a2022539be3402561398dbaa367b877bd373e87043a2a883b26e04638ba"} Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.296492 4932 scope.go:117] "RemoveContainer" containerID="004980b5d8abc3f9855902198b1efeb131117fc949d4f8c4b5c8b8b2d74e77fe" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.296599 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.302896 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.343489 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.381997 4932 scope.go:117] "RemoveContainer" containerID="ad61282ffc3a72810e8f042075ed0da414d8b82de84a0ac7c8a5f7db1e3ef9a9" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.431047 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.462033 4932 scope.go:117] "RemoveContainer" containerID="7acb15159d4595ba439a2fb0bc1f02945c077e1367284777296d41f6db4c2909" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.475437 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.499914 4932 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:34 crc kubenswrapper[4932]: E0218 19:55:34.500313 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="proxy-httpd" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500333 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="proxy-httpd" Feb 18 19:55:34 crc kubenswrapper[4932]: E0218 19:55:34.500356 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-central-agent" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500362 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-central-agent" Feb 18 19:55:34 crc kubenswrapper[4932]: E0218 19:55:34.500381 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-notification-agent" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500387 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-notification-agent" Feb 18 19:55:34 crc kubenswrapper[4932]: E0218 19:55:34.500402 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="sg-core" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500409 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="sg-core" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500580 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="proxy-httpd" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500598 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-notification-agent" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500614 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="sg-core" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.500622 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" containerName="ceilometer-central-agent" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.502628 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.504669 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.505229 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.511832 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.521656 4932 scope.go:117] "RemoveContainer" containerID="352d9d060068cebd5ee94ed059873c579e4c314ed02a4f51b120c4e46c462b6a" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634360 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-log-httpd\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634520 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634552 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634583 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5x86\" (UniqueName: \"kubernetes.io/projected/f22c0acb-8789-4ba1-8e45-8e456165db99-kube-api-access-k5x86\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634630 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-run-httpd\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634884 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-config-data\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.634968 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-scripts\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.736837 
4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5x86\" (UniqueName: \"kubernetes.io/projected/f22c0acb-8789-4ba1-8e45-8e456165db99-kube-api-access-k5x86\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.736932 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-run-httpd\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.736962 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-config-data\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.736984 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-scripts\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.737026 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-log-httpd\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.737112 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.737129 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.738337 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-log-httpd\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.738354 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-run-httpd\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.744432 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.746919 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-scripts\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.747089 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-config-data\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.754243 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5x86\" (UniqueName: \"kubernetes.io/projected/f22c0acb-8789-4ba1-8e45-8e456165db99-kube-api-access-k5x86\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.759058 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " pod="openstack/ceilometer-0" Feb 18 19:55:34 crc kubenswrapper[4932]: I0218 19:55:34.827290 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:55:35 crc kubenswrapper[4932]: I0218 19:55:35.201934 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b53bc70-03d1-4b04-8b5e-bf135aed16bc" path="/var/lib/kubelet/pods/2b53bc70-03d1-4b04-8b5e-bf135aed16bc/volumes" Feb 18 19:55:35 crc kubenswrapper[4932]: I0218 19:55:35.203440 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdfd208a-d781-4471-aa15-5fcbb592ec07" path="/var/lib/kubelet/pods/bdfd208a-d781-4471-aa15-5fcbb592ec07/volumes" Feb 18 19:55:35 crc kubenswrapper[4932]: I0218 19:55:35.288121 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:55:35 crc kubenswrapper[4932]: I0218 19:55:35.382704 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ef7cbfe-936d-4b0d-92c7-f61b7b89a735","Type":"ContainerStarted","Data":"179316e850b98a1ff2ef03813dbe9079b39fb09e9b14ba7f8ff6facb6fd83f93"} Feb 18 19:55:35 crc kubenswrapper[4932]: I0218 19:55:35.398706 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb2c5249-4fcd-404d-8eac-551e66fb93d0","Type":"ContainerStarted","Data":"8cc6ec2054f165251b0303c9d801ed55f1494727efd6a488b1607afbef5447eb"} Feb 18 19:55:35 crc kubenswrapper[4932]: I0218 19:55:35.424501 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.424478989 podStartE2EDuration="3.424478989s" podCreationTimestamp="2026-02-18 19:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:35.409585739 +0000 UTC m=+1298.991540594" watchObservedRunningTime="2026-02-18 19:55:35.424478989 +0000 UTC m=+1299.006433824" Feb 18 19:55:36 crc kubenswrapper[4932]: I0218 19:55:36.414280 4932 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerStarted","Data":"4f612b79e40b95e6fef0e37a0198be25f5c486cd3ca03eaa4c43b2840baeb770"} Feb 18 19:55:36 crc kubenswrapper[4932]: I0218 19:55:36.414814 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerStarted","Data":"a42357a04eca2427447a527f9b884286ac30d97b8bf59de7d2cd9869618e566a"} Feb 18 19:55:36 crc kubenswrapper[4932]: I0218 19:55:36.414827 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerStarted","Data":"a620663d217c47ddd4628558591f9269acb00cf7b394dcbb5dec8251391d19e8"} Feb 18 19:55:36 crc kubenswrapper[4932]: I0218 19:55:36.416459 4932 generic.go:334] "Generic (PLEG): container finished" podID="c88334ec-64f6-41ba-aee5-d5323e8c0c25" containerID="4e7866a2ddd0a42f76d440fa6b1c16f63d3f4f13968f3f538f0dc810522b826b" exitCode=0 Feb 18 19:55:36 crc kubenswrapper[4932]: I0218 19:55:36.416546 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-64b8m" event={"ID":"c88334ec-64f6-41ba-aee5-d5323e8c0c25","Type":"ContainerDied","Data":"4e7866a2ddd0a42f76d440fa6b1c16f63d3f4f13968f3f538f0dc810522b826b"} Feb 18 19:55:36 crc kubenswrapper[4932]: I0218 19:55:36.419135 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb2c5249-4fcd-404d-8eac-551e66fb93d0","Type":"ContainerStarted","Data":"28346a7ce5cb4a646795e2f1b49dd135ef4701e8b2af33723542571933b8aee2"} Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.233907 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.233882866 podStartE2EDuration="4.233882866s" podCreationTimestamp="2026-02-18 19:55:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:36.462577359 +0000 UTC m=+1300.044532204" watchObservedRunningTime="2026-02-18 19:55:37.233882866 +0000 UTC m=+1300.815837721" Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.454215 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerStarted","Data":"2697d0543e7fe8649877a6210966590083b7e47b807f2346f64c28d10d502f59"} Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.799903 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.918682 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-config-data\") pod \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.918800 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-combined-ca-bundle\") pod \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.918867 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-scripts\") pod \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.919026 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m25dj\" (UniqueName: 
\"kubernetes.io/projected/c88334ec-64f6-41ba-aee5-d5323e8c0c25-kube-api-access-m25dj\") pod \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\" (UID: \"c88334ec-64f6-41ba-aee5-d5323e8c0c25\") " Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.926833 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c88334ec-64f6-41ba-aee5-d5323e8c0c25-kube-api-access-m25dj" (OuterVolumeSpecName: "kube-api-access-m25dj") pod "c88334ec-64f6-41ba-aee5-d5323e8c0c25" (UID: "c88334ec-64f6-41ba-aee5-d5323e8c0c25"). InnerVolumeSpecName "kube-api-access-m25dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.942616 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-scripts" (OuterVolumeSpecName: "scripts") pod "c88334ec-64f6-41ba-aee5-d5323e8c0c25" (UID: "c88334ec-64f6-41ba-aee5-d5323e8c0c25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.963445 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c88334ec-64f6-41ba-aee5-d5323e8c0c25" (UID: "c88334ec-64f6-41ba-aee5-d5323e8c0c25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:37 crc kubenswrapper[4932]: I0218 19:55:37.965541 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-config-data" (OuterVolumeSpecName: "config-data") pod "c88334ec-64f6-41ba-aee5-d5323e8c0c25" (UID: "c88334ec-64f6-41ba-aee5-d5323e8c0c25"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.020836 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m25dj\" (UniqueName: \"kubernetes.io/projected/c88334ec-64f6-41ba-aee5-d5323e8c0c25-kube-api-access-m25dj\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.021114 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.021125 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.021133 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c88334ec-64f6-41ba-aee5-d5323e8c0c25-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.503585 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-64b8m" event={"ID":"c88334ec-64f6-41ba-aee5-d5323e8c0c25","Type":"ContainerDied","Data":"956c5d87f252b7fd42789858690e26fc0ca0b5a7be8c4cc152d63b1bddc300e7"} Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.503629 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="956c5d87f252b7fd42789858690e26fc0ca0b5a7be8c4cc152d63b1bddc300e7" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.503688 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-64b8m" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.591872 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:55:38 crc kubenswrapper[4932]: E0218 19:55:38.592701 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c88334ec-64f6-41ba-aee5-d5323e8c0c25" containerName="nova-cell0-conductor-db-sync" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.592727 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c88334ec-64f6-41ba-aee5-d5323e8c0c25" containerName="nova-cell0-conductor-db-sync" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.592996 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c88334ec-64f6-41ba-aee5-d5323e8c0c25" containerName="nova-cell0-conductor-db-sync" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.593865 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.597151 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rd8q2" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.597466 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.607305 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.736439 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a4ef33d-657c-4785-9c64-7bb797728924-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 
19:55:38.736521 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jckq2\" (UniqueName: \"kubernetes.io/projected/7a4ef33d-657c-4785-9c64-7bb797728924-kube-api-access-jckq2\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.736942 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4ef33d-657c-4785-9c64-7bb797728924-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.838263 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4ef33d-657c-4785-9c64-7bb797728924-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.838352 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a4ef33d-657c-4785-9c64-7bb797728924-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.838378 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jckq2\" (UniqueName: \"kubernetes.io/projected/7a4ef33d-657c-4785-9c64-7bb797728924-kube-api-access-jckq2\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.846227 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a4ef33d-657c-4785-9c64-7bb797728924-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.846957 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4ef33d-657c-4785-9c64-7bb797728924-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.856566 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jckq2\" (UniqueName: \"kubernetes.io/projected/7a4ef33d-657c-4785-9c64-7bb797728924-kube-api-access-jckq2\") pod \"nova-cell0-conductor-0\" (UID: \"7a4ef33d-657c-4785-9c64-7bb797728924\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:38 crc kubenswrapper[4932]: I0218 19:55:38.925968 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:39 crc kubenswrapper[4932]: I0218 19:55:39.436862 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:55:39 crc kubenswrapper[4932]: W0218 19:55:39.439870 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a4ef33d_657c_4785_9c64_7bb797728924.slice/crio-12970eaa7b7dcfb7c55147ddac04f2e785903422742c5f3f12f8d4ab2607db59 WatchSource:0}: Error finding container 12970eaa7b7dcfb7c55147ddac04f2e785903422742c5f3f12f8d4ab2607db59: Status 404 returned error can't find the container with id 12970eaa7b7dcfb7c55147ddac04f2e785903422742c5f3f12f8d4ab2607db59 Feb 18 19:55:39 crc kubenswrapper[4932]: I0218 19:55:39.523364 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerStarted","Data":"235fb721bf81fe59350072741d94ffeb2cb2dcf4dda7a36192f0baba9a50695d"} Feb 18 19:55:39 crc kubenswrapper[4932]: I0218 19:55:39.523634 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:55:39 crc kubenswrapper[4932]: I0218 19:55:39.532551 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7a4ef33d-657c-4785-9c64-7bb797728924","Type":"ContainerStarted","Data":"12970eaa7b7dcfb7c55147ddac04f2e785903422742c5f3f12f8d4ab2607db59"} Feb 18 19:55:39 crc kubenswrapper[4932]: I0218 19:55:39.550032 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.513347359 podStartE2EDuration="5.550013218s" podCreationTimestamp="2026-02-18 19:55:34 +0000 UTC" firstStartedPulling="2026-02-18 19:55:35.372340596 +0000 UTC m=+1298.954295441" lastFinishedPulling="2026-02-18 19:55:38.409006445 +0000 UTC m=+1301.990961300" 
observedRunningTime="2026-02-18 19:55:39.544745787 +0000 UTC m=+1303.126700652" watchObservedRunningTime="2026-02-18 19:55:39.550013218 +0000 UTC m=+1303.131968063" Feb 18 19:55:40 crc kubenswrapper[4932]: I0218 19:55:40.546655 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7a4ef33d-657c-4785-9c64-7bb797728924","Type":"ContainerStarted","Data":"9dc9ef7edd603b01dba71c362976e69bd74fe6c5c533cbd5ca93d7f2cf9c6180"} Feb 18 19:55:40 crc kubenswrapper[4932]: I0218 19:55:40.547775 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:40 crc kubenswrapper[4932]: I0218 19:55:40.591232 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.591206706 podStartE2EDuration="2.591206706s" podCreationTimestamp="2026-02-18 19:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:40.581735862 +0000 UTC m=+1304.163690757" watchObservedRunningTime="2026-02-18 19:55:40.591206706 +0000 UTC m=+1304.173161581" Feb 18 19:55:42 crc kubenswrapper[4932]: I0218 19:55:42.674836 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:55:42 crc kubenswrapper[4932]: I0218 19:55:42.676371 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:55:42 crc kubenswrapper[4932]: I0218 19:55:42.703849 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:55:42 crc kubenswrapper[4932]: I0218 19:55:42.723078 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:55:43 crc kubenswrapper[4932]: I0218 
19:55:43.579950 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:55:43 crc kubenswrapper[4932]: I0218 19:55:43.580030 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:55:43 crc kubenswrapper[4932]: I0218 19:55:43.674117 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:43 crc kubenswrapper[4932]: I0218 19:55:43.674243 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:43 crc kubenswrapper[4932]: I0218 19:55:43.717761 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:43 crc kubenswrapper[4932]: I0218 19:55:43.756282 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:44 crc kubenswrapper[4932]: I0218 19:55:44.588937 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:44 crc kubenswrapper[4932]: I0218 19:55:44.588978 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:45 crc kubenswrapper[4932]: I0218 19:55:45.313973 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:55:45 crc kubenswrapper[4932]: I0218 19:55:45.344016 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:55:46 crc kubenswrapper[4932]: I0218 19:55:46.375774 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:46 crc kubenswrapper[4932]: I0218 
19:55:46.380255 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:55:48 crc kubenswrapper[4932]: I0218 19:55:48.976837 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.488053 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-xlzdb"] Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.489324 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.491923 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.492031 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.499280 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xlzdb"] Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.556426 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-config-data\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.556500 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-scripts\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc 
kubenswrapper[4932]: I0218 19:55:49.556606 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.556886 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76srz\" (UniqueName: \"kubernetes.io/projected/6473c7ac-af7d-4556-aa86-28aabc85694a-kube-api-access-76srz\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.637343 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.638733 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.642611 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.658382 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76srz\" (UniqueName: \"kubernetes.io/projected/6473c7ac-af7d-4556-aa86-28aabc85694a-kube-api-access-76srz\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.658451 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-config-data\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.658484 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-scripts\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.658531 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.668843 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-config-data\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.684899 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.686290 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-scripts\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.704097 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76srz\" (UniqueName: \"kubernetes.io/projected/6473c7ac-af7d-4556-aa86-28aabc85694a-kube-api-access-76srz\") pod \"nova-cell0-cell-mapping-xlzdb\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.715219 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.760296 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb946\" (UniqueName: \"kubernetes.io/projected/59185a09-938b-47ba-99ed-1b81362038e0-kube-api-access-cb946\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.760348 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.760367 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.768367 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.769538 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.772507 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.801491 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.803183 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.813420 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.819432 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.864964 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865038 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a9bdea-8dd1-4825-971a-36c348e2a918-logs\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865065 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-config-data\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865105 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jf96\" (UniqueName: \"kubernetes.io/projected/3e97df52-5201-479d-aae1-ac0c36e3ea63-kube-api-access-7jf96\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865128 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-config-data\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0" Feb 18 19:55:49 crc 
kubenswrapper[4932]: I0218 19:55:49.865193 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb946\" (UniqueName: \"kubernetes.io/projected/59185a09-938b-47ba-99ed-1b81362038e0-kube-api-access-cb946\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865224 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865246 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865269 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzznz\" (UniqueName: \"kubernetes.io/projected/34a9bdea-8dd1-4825-971a-36c348e2a918-kube-api-access-vzznz\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.865300 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0" Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.870575 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.874737 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.876325 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.944192 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.953264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb946\" (UniqueName: \"kubernetes.io/projected/59185a09-938b-47ba-99ed-1b81362038e0-kube-api-access-cb946\") pod \"nova-cell1-novncproxy-0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.965470 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966681 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jf96\" (UniqueName: \"kubernetes.io/projected/3e97df52-5201-479d-aae1-ac0c36e3ea63-kube-api-access-7jf96\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966714 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-config-data\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966785 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzznz\" (UniqueName: \"kubernetes.io/projected/34a9bdea-8dd1-4825-971a-36c348e2a918-kube-api-access-vzznz\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966811 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966856 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966894 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a9bdea-8dd1-4825-971a-36c348e2a918-logs\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.966911 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-config-data\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.970550 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.972197 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.973441 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a9bdea-8dd1-4825-971a-36c348e2a918-logs\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.978733 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.978948 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.980122 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.981008 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-config-data\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:49 crc kubenswrapper[4932]: I0218 19:55:49.983855 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-config-data\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.009801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jf96\" (UniqueName: \"kubernetes.io/projected/3e97df52-5201-479d-aae1-ac0c36e3ea63-kube-api-access-7jf96\") pod \"nova-scheduler-0\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " pod="openstack/nova-scheduler-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.020934 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzznz\" (UniqueName: \"kubernetes.io/projected/34a9bdea-8dd1-4825-971a-36c348e2a918-kube-api-access-vzznz\") pod \"nova-metadata-0\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " pod="openstack/nova-metadata-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.063336 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.068981 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-config-data\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.069052 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsfs8\" (UniqueName: \"kubernetes.io/projected/a445a66f-1685-4542-89c3-012fef147a76-kube-api-access-xsfs8\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.069090 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a445a66f-1685-4542-89c3-012fef147a76-logs\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.069275 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.086604 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-87f66f8bf-sszng"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.089943 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.109254 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-87f66f8bf-sszng"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.125640 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.167965 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170262 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsfs8\" (UniqueName: \"kubernetes.io/projected/a445a66f-1685-4542-89c3-012fef147a76-kube-api-access-xsfs8\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170302 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a445a66f-1685-4542-89c3-012fef147a76-logs\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170326 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-sb\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170352 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-swift-storage-0\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170370 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-svc\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170455 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-nb\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170482 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjq9z\" (UniqueName: \"kubernetes.io/projected/c89ff872-244d-428a-a29c-3b9adeae5c0c-kube-api-access-cjq9z\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170514 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-config\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170570 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170597 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-config-data\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.170687 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a445a66f-1685-4542-89c3-012fef147a76-logs\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.174013 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.174886 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-config-data\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.187394 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsfs8\" (UniqueName: \"kubernetes.io/projected/a445a66f-1685-4542-89c3-012fef147a76-kube-api-access-xsfs8\") pod \"nova-api-0\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.273183 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-config\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.273310 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-sb\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.273334 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-swift-storage-0\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.273357 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-svc\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.273417 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-nb\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.273453 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjq9z\" (UniqueName: \"kubernetes.io/projected/c89ff872-244d-428a-a29c-3b9adeae5c0c-kube-api-access-cjq9z\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.274498 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-config\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.277117 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-swift-storage-0\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.277607 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-svc\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.280340 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-sb\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.282982 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-nb\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.291657 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjq9z\" (UniqueName: \"kubernetes.io/projected/c89ff872-244d-428a-a29c-3b9adeae5c0c-kube-api-access-cjq9z\") pod \"dnsmasq-dns-87f66f8bf-sszng\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.361634 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.424526 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.521148 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xlzdb"]
Feb 18 19:55:50 crc kubenswrapper[4932]: W0218 19:55:50.531161 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6473c7ac_af7d_4556_aa86_28aabc85694a.slice/crio-0726c55d4787049d77ae93d959e7adf39c17f94bb13160b743ceddf464afc9d7 WatchSource:0}: Error finding container 0726c55d4787049d77ae93d959e7adf39c17f94bb13160b743ceddf464afc9d7: Status 404 returned error can't find the container with id 0726c55d4787049d77ae93d959e7adf39c17f94bb13160b743ceddf464afc9d7
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.634939 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.660231 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f756w"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.661969 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.667555 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.667739 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.685679 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-scripts\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.685993 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzw4x\" (UniqueName: \"kubernetes.io/projected/5d3a07cf-a084-46a0-8ca2-830e0838d575-kube-api-access-bzw4x\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.686026 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-config-data\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.686044 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.688247 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f756w"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.692565 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xlzdb" event={"ID":"6473c7ac-af7d-4556-aa86-28aabc85694a","Type":"ContainerStarted","Data":"0726c55d4787049d77ae93d959e7adf39c17f94bb13160b743ceddf464afc9d7"}
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.760148 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.787332 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.789010 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzw4x\" (UniqueName: \"kubernetes.io/projected/5d3a07cf-a084-46a0-8ca2-830e0838d575-kube-api-access-bzw4x\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.789065 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-config-data\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.789086 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.789187 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-scripts\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.795517 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-config-data\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.799801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.800146 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-scripts\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.810609 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzw4x\" (UniqueName: \"kubernetes.io/projected/5d3a07cf-a084-46a0-8ca2-830e0838d575-kube-api-access-bzw4x\") pod \"nova-cell1-conductor-db-sync-f756w\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:50 crc kubenswrapper[4932]: I0218 19:55:50.903312 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f756w"
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.019717 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:55:51 crc kubenswrapper[4932]: W0218 19:55:51.055412 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda445a66f_1685_4542_89c3_012fef147a76.slice/crio-db9aed346e952a35a560a0f801674a02f5a8f28572c2af0ca6ba733c50ec6e31 WatchSource:0}: Error finding container db9aed346e952a35a560a0f801674a02f5a8f28572c2af0ca6ba733c50ec6e31: Status 404 returned error can't find the container with id db9aed346e952a35a560a0f801674a02f5a8f28572c2af0ca6ba733c50ec6e31
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.233685 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-87f66f8bf-sszng"]
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.474208 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f756w"]
Feb 18 19:55:51 crc kubenswrapper[4932]: W0218 19:55:51.503607 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d3a07cf_a084_46a0_8ca2_830e0838d575.slice/crio-1f840d02307e281dd77d4c46053f73ca0e900cd59304c1e3d8da776e2b0a46e0 WatchSource:0}: Error finding container 1f840d02307e281dd77d4c46053f73ca0e900cd59304c1e3d8da776e2b0a46e0: Status 404 returned error can't find the container with id 1f840d02307e281dd77d4c46053f73ca0e900cd59304c1e3d8da776e2b0a46e0
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.768264 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a445a66f-1685-4542-89c3-012fef147a76","Type":"ContainerStarted","Data":"db9aed346e952a35a560a0f801674a02f5a8f28572c2af0ca6ba733c50ec6e31"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.786794 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3e97df52-5201-479d-aae1-ac0c36e3ea63","Type":"ContainerStarted","Data":"5e68c76538cd952d6f6a3dd14aebb40e0d4b05858a3c9289e0a5ad892f731528"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.789819 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f756w" event={"ID":"5d3a07cf-a084-46a0-8ca2-830e0838d575","Type":"ContainerStarted","Data":"1f840d02307e281dd77d4c46053f73ca0e900cd59304c1e3d8da776e2b0a46e0"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.792509 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"59185a09-938b-47ba-99ed-1b81362038e0","Type":"ContainerStarted","Data":"372bb3654ce51919b696e3d9eb989784a6ab397c40b87b62e7e2d42b5443d7b8"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.793995 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a9bdea-8dd1-4825-971a-36c348e2a918","Type":"ContainerStarted","Data":"a03c13b667e70bffdfe5ae8206b4073cc9d064e02a2aa3bfd907faed67753e61"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.795904 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xlzdb" event={"ID":"6473c7ac-af7d-4556-aa86-28aabc85694a","Type":"ContainerStarted","Data":"7ff7e9bf05a2ba3237ddc130003a316b61a512ddd8b5c858384cd739b41a1cfd"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.805686 4932 generic.go:334] "Generic (PLEG): container finished" podID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerID="efa1de95f92b6f71ab718eba81f5146f37d50f46643463b88203e329ebaceb9a" exitCode=0
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.805744 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" event={"ID":"c89ff872-244d-428a-a29c-3b9adeae5c0c","Type":"ContainerDied","Data":"efa1de95f92b6f71ab718eba81f5146f37d50f46643463b88203e329ebaceb9a"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.805775 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" event={"ID":"c89ff872-244d-428a-a29c-3b9adeae5c0c","Type":"ContainerStarted","Data":"3a5bcecade0b5dff94560cc8f3a4637b00cd9cdde3e3372019fd257bdc54822e"}
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.823078 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-f756w" podStartSLOduration=1.823057751 podStartE2EDuration="1.823057751s" podCreationTimestamp="2026-02-18 19:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:51.812878649 +0000 UTC m=+1315.394833494" watchObservedRunningTime="2026-02-18 19:55:51.823057751 +0000 UTC m=+1315.405012596"
Feb 18 19:55:51 crc kubenswrapper[4932]: I0218 19:55:51.840749 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-xlzdb" podStartSLOduration=2.8407345189999997 podStartE2EDuration="2.840734519s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:51.836472484 +0000 UTC m=+1315.418427329" watchObservedRunningTime="2026-02-18 19:55:51.840734519 +0000 UTC m=+1315.422689364"
Feb 18 19:55:52 crc kubenswrapper[4932]: I0218 19:55:52.825777 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f756w" event={"ID":"5d3a07cf-a084-46a0-8ca2-830e0838d575","Type":"ContainerStarted","Data":"9bb9eedee5db3508051ad5cf9468f19b751623f5c59dfbe177da134d00b7fc1f"}
Feb 18 19:55:52 crc kubenswrapper[4932]: I0218 19:55:52.833882 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" event={"ID":"c89ff872-244d-428a-a29c-3b9adeae5c0c","Type":"ContainerStarted","Data":"3c2dd4d6051f054d8ec462a813d59a2da849d9297e15a4c7e5cbe0de8d6eca93"}
Feb 18 19:55:52 crc kubenswrapper[4932]: I0218 19:55:52.863442 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" podStartSLOduration=3.863377078 podStartE2EDuration="3.863377078s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:52.855354299 +0000 UTC m=+1316.437309144" watchObservedRunningTime="2026-02-18 19:55:52.863377078 +0000 UTC m=+1316.445331923"
Feb 18 19:55:53 crc kubenswrapper[4932]: I0218 19:55:53.827367 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:55:53 crc kubenswrapper[4932]: I0218 19:55:53.843181 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-87f66f8bf-sszng"
Feb 18 19:55:53 crc kubenswrapper[4932]: I0218 19:55:53.862423 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.876650 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"59185a09-938b-47ba-99ed-1b81362038e0","Type":"ContainerStarted","Data":"f581b8c9ce44e42d3ff03f376a0f68bc8c6d3dd65d58f6d7b80411f3452dd5a6"}
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.877546 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="59185a09-938b-47ba-99ed-1b81362038e0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f581b8c9ce44e42d3ff03f376a0f68bc8c6d3dd65d58f6d7b80411f3452dd5a6" gracePeriod=30
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.884780 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a9bdea-8dd1-4825-971a-36c348e2a918","Type":"ContainerStarted","Data":"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69"}
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.884823 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a9bdea-8dd1-4825-971a-36c348e2a918","Type":"ContainerStarted","Data":"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971"}
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.885221 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-log" containerID="cri-o://3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971" gracePeriod=30
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.885426 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-metadata" containerID="cri-o://a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69" gracePeriod=30
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.892801 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a445a66f-1685-4542-89c3-012fef147a76","Type":"ContainerStarted","Data":"5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9"}
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.892859 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a445a66f-1685-4542-89c3-012fef147a76","Type":"ContainerStarted","Data":"4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4"}
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.897782 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3e97df52-5201-479d-aae1-ac0c36e3ea63","Type":"ContainerStarted","Data":"dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0"}
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.961600 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.81816397 podStartE2EDuration="6.961554333s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="2026-02-18 19:55:50.689861531 +0000 UTC m=+1314.271816376" lastFinishedPulling="2026-02-18 19:55:54.833251894 +0000 UTC m=+1318.415206739" observedRunningTime="2026-02-18 19:55:55.913728577 +0000 UTC m=+1319.495683412" watchObservedRunningTime="2026-02-18 19:55:55.961554333 +0000 UTC m=+1319.543509188"
Feb 18 19:55:55 crc kubenswrapper[4932]: I0218 19:55:55.969849 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.204336676 podStartE2EDuration="6.969833638s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="2026-02-18 19:55:51.067442384 +0000 UTC m=+1314.649397229" lastFinishedPulling="2026-02-18 19:55:54.832939346 +0000 UTC m=+1318.414894191" observedRunningTime="2026-02-18 19:55:55.963292256 +0000 UTC m=+1319.545247111" watchObservedRunningTime="2026-02-18 19:55:55.969833638 +0000 UTC m=+1319.551788483"
Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.008148 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.952273656 podStartE2EDuration="7.008130218s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="2026-02-18 19:55:50.780742615 +0000 UTC m=+1314.362697460" lastFinishedPulling="2026-02-18 19:55:54.836599177 +0000 UTC m=+1318.418554022" observedRunningTime="2026-02-18 19:55:55.983872726 +0000 UTC m=+1319.565827571" watchObservedRunningTime="2026-02-18 19:55:56.008130218 +0000 UTC m=+1319.590085063"
Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.011993 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.956400857 podStartE2EDuration="7.011976193s" podCreationTimestamp="2026-02-18 19:55:49 +0000 UTC" firstStartedPulling="2026-02-18 19:55:50.780488188 +0000 UTC m=+1314.362443033" lastFinishedPulling="2026-02-18 19:55:54.836063524 +0000 UTC m=+1318.418018369" observedRunningTime="2026-02-18 19:55:56.006105267 +0000 UTC m=+1319.588060112" watchObservedRunningTime="2026-02-18 19:55:56.011976193 +0000 UTC m=+1319.593931038"
Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.504785 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.556680 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-config-data\") pod \"34a9bdea-8dd1-4825-971a-36c348e2a918\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.556743 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-combined-ca-bundle\") pod \"34a9bdea-8dd1-4825-971a-36c348e2a918\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.556900 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzznz\" (UniqueName: \"kubernetes.io/projected/34a9bdea-8dd1-4825-971a-36c348e2a918-kube-api-access-vzznz\") pod \"34a9bdea-8dd1-4825-971a-36c348e2a918\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.556962 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a9bdea-8dd1-4825-971a-36c348e2a918-logs\") pod \"34a9bdea-8dd1-4825-971a-36c348e2a918\" (UID: \"34a9bdea-8dd1-4825-971a-36c348e2a918\") " Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.557834 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a9bdea-8dd1-4825-971a-36c348e2a918-logs" (OuterVolumeSpecName: "logs") pod "34a9bdea-8dd1-4825-971a-36c348e2a918" (UID: "34a9bdea-8dd1-4825-971a-36c348e2a918"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.564069 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a9bdea-8dd1-4825-971a-36c348e2a918-kube-api-access-vzznz" (OuterVolumeSpecName: "kube-api-access-vzznz") pod "34a9bdea-8dd1-4825-971a-36c348e2a918" (UID: "34a9bdea-8dd1-4825-971a-36c348e2a918"). InnerVolumeSpecName "kube-api-access-vzznz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.594491 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34a9bdea-8dd1-4825-971a-36c348e2a918" (UID: "34a9bdea-8dd1-4825-971a-36c348e2a918"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.596339 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-config-data" (OuterVolumeSpecName: "config-data") pod "34a9bdea-8dd1-4825-971a-36c348e2a918" (UID: "34a9bdea-8dd1-4825-971a-36c348e2a918"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.660032 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzznz\" (UniqueName: \"kubernetes.io/projected/34a9bdea-8dd1-4825-971a-36c348e2a918-kube-api-access-vzznz\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.660103 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a9bdea-8dd1-4825-971a-36c348e2a918-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.660123 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.660141 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34a9bdea-8dd1-4825-971a-36c348e2a918-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.914167 4932 generic.go:334] "Generic (PLEG): container finished" podID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerID="a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69" exitCode=0 Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.914214 4932 generic.go:334] "Generic (PLEG): container finished" podID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerID="3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971" exitCode=143 Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.914231 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a9bdea-8dd1-4825-971a-36c348e2a918","Type":"ContainerDied","Data":"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69"} Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.914284 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a9bdea-8dd1-4825-971a-36c348e2a918","Type":"ContainerDied","Data":"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971"} Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.914299 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34a9bdea-8dd1-4825-971a-36c348e2a918","Type":"ContainerDied","Data":"a03c13b667e70bffdfe5ae8206b4073cc9d064e02a2aa3bfd907faed67753e61"} Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.914320 4932 scope.go:117] "RemoveContainer" containerID="a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.915410 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.962985 4932 scope.go:117] "RemoveContainer" containerID="3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971" Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.963613 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:55:56 crc kubenswrapper[4932]: I0218 19:55:56.999308 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.009117 4932 scope.go:117] "RemoveContainer" containerID="a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69" Feb 18 19:55:57 crc kubenswrapper[4932]: E0218 19:55:57.010054 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69\": container with ID starting with a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69 not found: ID does not exist" 
containerID="a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.010169 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69"} err="failed to get container status \"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69\": rpc error: code = NotFound desc = could not find container \"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69\": container with ID starting with a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69 not found: ID does not exist" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.010293 4932 scope.go:117] "RemoveContainer" containerID="3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.012524 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:55:57 crc kubenswrapper[4932]: E0218 19:55:57.012782 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971\": container with ID starting with 3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971 not found: ID does not exist" containerID="3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.012982 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971"} err="failed to get container status \"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971\": rpc error: code = NotFound desc = could not find container \"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971\": container with ID starting with 
3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971 not found: ID does not exist" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013116 4932 scope.go:117] "RemoveContainer" containerID="a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69" Feb 18 19:55:57 crc kubenswrapper[4932]: E0218 19:55:57.013265 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-log" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013319 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-log" Feb 18 19:55:57 crc kubenswrapper[4932]: E0218 19:55:57.013370 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-metadata" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013382 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-metadata" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013638 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69"} err="failed to get container status \"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69\": rpc error: code = NotFound desc = could not find container \"a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69\": container with ID starting with a38e6d84ed93d95a71871886a3ff7e7a880e65c05bf456981f9fd96ef2264c69 not found: ID does not exist" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013664 4932 scope.go:117] "RemoveContainer" containerID="3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013686 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-log" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013744 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" containerName="nova-metadata-metadata" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.013998 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971"} err="failed to get container status \"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971\": rpc error: code = NotFound desc = could not find container \"3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971\": container with ID starting with 3869bc2145866fc0f9b06dba62bd05f82869d50a18b1ee673212c63f4aa3f971 not found: ID does not exist" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.015369 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.025102 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.026162 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.081983 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-logs\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.082222 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-config-data\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.082269 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.082391 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvdwg\" (UniqueName: \"kubernetes.io/projected/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-kube-api-access-bvdwg\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.082781 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.084759 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.185505 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-config-data\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.185543 4932 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.185595 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvdwg\" (UniqueName: \"kubernetes.io/projected/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-kube-api-access-bvdwg\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.185661 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.185693 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-logs\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.186074 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-logs\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.191116 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " 
pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.197691 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.197938 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.201642 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a9bdea-8dd1-4825-971a-36c348e2a918" path="/var/lib/kubelet/pods/34a9bdea-8dd1-4825-971a-36c348e2a918/volumes" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.211155 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.211932 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-config-data\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.244617 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvdwg\" (UniqueName: \"kubernetes.io/projected/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-kube-api-access-bvdwg\") pod \"nova-metadata-0\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.356674 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.606423 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.606742 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.844076 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:55:57 crc kubenswrapper[4932]: I0218 19:55:57.959210 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad","Type":"ContainerStarted","Data":"ac08c2a96822adae87e627ab8c6ab7ba89e03640acee4a44553d726b259966e3"} Feb 18 19:55:58 crc kubenswrapper[4932]: I0218 19:55:58.975278 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad","Type":"ContainerStarted","Data":"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7"} Feb 18 19:55:58 crc kubenswrapper[4932]: I0218 19:55:58.975696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad","Type":"ContainerStarted","Data":"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505"} Feb 18 19:55:59 crc kubenswrapper[4932]: I0218 19:55:59.000737 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-0" podStartSLOduration=3.000714474 podStartE2EDuration="3.000714474s" podCreationTimestamp="2026-02-18 19:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:55:59.000708214 +0000 UTC m=+1322.582663069" watchObservedRunningTime="2026-02-18 19:55:59.000714474 +0000 UTC m=+1322.582669329" Feb 18 19:55:59 crc kubenswrapper[4932]: I0218 19:55:59.966025 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.126881 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.126991 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.165400 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.362550 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.362641 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.427482 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.507610 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cb46c75-kwghr"] Feb 18 19:56:00 crc kubenswrapper[4932]: I0218 19:56:00.507852 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" 
podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerName="dnsmasq-dns" containerID="cri-o://2dd4d65476d1505ac595577a77e37ccd6902dc5b61d39daf8b0813fba6426e5c" gracePeriod=10 Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.003725 4932 generic.go:334] "Generic (PLEG): container finished" podID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerID="2dd4d65476d1505ac595577a77e37ccd6902dc5b61d39daf8b0813fba6426e5c" exitCode=0 Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.003832 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" event={"ID":"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616","Type":"ContainerDied","Data":"2dd4d65476d1505ac595577a77e37ccd6902dc5b61d39daf8b0813fba6426e5c"} Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.004367 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" event={"ID":"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616","Type":"ContainerDied","Data":"abf4fe1aeef8ebb3bf6d40f6b972486e6ef67f658fa19698f1bd32267dd142b9"} Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.004463 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abf4fe1aeef8ebb3bf6d40f6b972486e6ef67f658fa19698f1bd32267dd142b9" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.006263 4932 generic.go:334] "Generic (PLEG): container finished" podID="6473c7ac-af7d-4556-aa86-28aabc85694a" containerID="7ff7e9bf05a2ba3237ddc130003a316b61a512ddd8b5c858384cd739b41a1cfd" exitCode=0 Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.006310 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xlzdb" event={"ID":"6473c7ac-af7d-4556-aa86-28aabc85694a","Type":"ContainerDied","Data":"7ff7e9bf05a2ba3237ddc130003a316b61a512ddd8b5c858384cd739b41a1cfd"} Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.039461 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-scheduler-0" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.046793 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.077460 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-svc\") pod \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.077515 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9qmk\" (UniqueName: \"kubernetes.io/projected/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-kube-api-access-b9qmk\") pod \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.077562 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-nb\") pod \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.077600 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-swift-storage-0\") pod \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.077657 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-sb\") pod \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\" (UID: 
\"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.077826 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-config\") pod \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\" (UID: \"ebab9a68-9ab1-4d04-84ec-9f54b1e6e616\") " Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.098409 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-kube-api-access-b9qmk" (OuterVolumeSpecName: "kube-api-access-b9qmk") pod "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" (UID: "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616"). InnerVolumeSpecName "kube-api-access-b9qmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.183842 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9qmk\" (UniqueName: \"kubernetes.io/projected/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-kube-api-access-b9qmk\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.278900 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-config" (OuterVolumeSpecName: "config") pod "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" (UID: "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.285618 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.286675 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" (UID: "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.297848 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" (UID: "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.310898 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" (UID: "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.317137 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" (UID: "ebab9a68-9ab1-4d04-84ec-9f54b1e6e616"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.387855 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.387914 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.387925 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.387937 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.446351 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:56:01 crc kubenswrapper[4932]: I0218 19:56:01.446361 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.020547 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-855cb46c75-kwghr" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.065800 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-855cb46c75-kwghr"] Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.090726 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-855cb46c75-kwghr"] Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.357405 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.357684 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.506723 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.618822 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76srz\" (UniqueName: \"kubernetes.io/projected/6473c7ac-af7d-4556-aa86-28aabc85694a-kube-api-access-76srz\") pod \"6473c7ac-af7d-4556-aa86-28aabc85694a\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.618864 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-combined-ca-bundle\") pod \"6473c7ac-af7d-4556-aa86-28aabc85694a\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.618994 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-config-data\") pod \"6473c7ac-af7d-4556-aa86-28aabc85694a\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " Feb 18 
19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.619037 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-scripts\") pod \"6473c7ac-af7d-4556-aa86-28aabc85694a\" (UID: \"6473c7ac-af7d-4556-aa86-28aabc85694a\") " Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.627602 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6473c7ac-af7d-4556-aa86-28aabc85694a-kube-api-access-76srz" (OuterVolumeSpecName: "kube-api-access-76srz") pod "6473c7ac-af7d-4556-aa86-28aabc85694a" (UID: "6473c7ac-af7d-4556-aa86-28aabc85694a"). InnerVolumeSpecName "kube-api-access-76srz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.628784 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-scripts" (OuterVolumeSpecName: "scripts") pod "6473c7ac-af7d-4556-aa86-28aabc85694a" (UID: "6473c7ac-af7d-4556-aa86-28aabc85694a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.647847 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6473c7ac-af7d-4556-aa86-28aabc85694a" (UID: "6473c7ac-af7d-4556-aa86-28aabc85694a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.659324 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-config-data" (OuterVolumeSpecName: "config-data") pod "6473c7ac-af7d-4556-aa86-28aabc85694a" (UID: "6473c7ac-af7d-4556-aa86-28aabc85694a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.721319 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.721353 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.721366 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76srz\" (UniqueName: \"kubernetes.io/projected/6473c7ac-af7d-4556-aa86-28aabc85694a-kube-api-access-76srz\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:02 crc kubenswrapper[4932]: I0218 19:56:02.721379 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6473c7ac-af7d-4556-aa86-28aabc85694a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.032068 4932 generic.go:334] "Generic (PLEG): container finished" podID="5d3a07cf-a084-46a0-8ca2-830e0838d575" containerID="9bb9eedee5db3508051ad5cf9468f19b751623f5c59dfbe177da134d00b7fc1f" exitCode=0 Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.032140 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f756w" event={"ID":"5d3a07cf-a084-46a0-8ca2-830e0838d575","Type":"ContainerDied","Data":"9bb9eedee5db3508051ad5cf9468f19b751623f5c59dfbe177da134d00b7fc1f"} Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.033583 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xlzdb" 
event={"ID":"6473c7ac-af7d-4556-aa86-28aabc85694a","Type":"ContainerDied","Data":"0726c55d4787049d77ae93d959e7adf39c17f94bb13160b743ceddf464afc9d7"} Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.033631 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0726c55d4787049d77ae93d959e7adf39c17f94bb13160b743ceddf464afc9d7" Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.033669 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xlzdb" Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.193341 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" path="/var/lib/kubelet/pods/ebab9a68-9ab1-4d04-84ec-9f54b1e6e616/volumes" Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.212938 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.213164 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-log" containerID="cri-o://4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4" gracePeriod=30 Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.213636 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-api" containerID="cri-o://5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9" gracePeriod=30 Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.252603 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.253300 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" 
podUID="3e97df52-5201-479d-aae1-ac0c36e3ea63" containerName="nova-scheduler-scheduler" containerID="cri-o://dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0" gracePeriod=30 Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.329838 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.330098 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-log" containerID="cri-o://bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505" gracePeriod=30 Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.330658 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-metadata" containerID="cri-o://321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7" gracePeriod=30 Feb 18 19:56:03 crc kubenswrapper[4932]: I0218 19:56:03.933704 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.049220 4932 generic.go:334] "Generic (PLEG): container finished" podID="a445a66f-1685-4542-89c3-012fef147a76" containerID="4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4" exitCode=143 Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.049289 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a445a66f-1685-4542-89c3-012fef147a76","Type":"ContainerDied","Data":"4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4"} Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.050229 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvdwg\" (UniqueName: \"kubernetes.io/projected/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-kube-api-access-bvdwg\") pod \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.050279 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-config-data\") pod \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.050383 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-logs\") pod \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.050412 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-nova-metadata-tls-certs\") pod \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\" (UID: 
\"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.050454 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-combined-ca-bundle\") pod \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\" (UID: \"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.051844 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-logs" (OuterVolumeSpecName: "logs") pod "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" (UID: "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.053105 4932 generic.go:334] "Generic (PLEG): container finished" podID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerID="321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7" exitCode=0 Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.053134 4932 generic.go:334] "Generic (PLEG): container finished" podID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerID="bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505" exitCode=143 Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.053619 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.054332 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad","Type":"ContainerDied","Data":"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7"} Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.054382 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad","Type":"ContainerDied","Data":"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505"} Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.054397 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad","Type":"ContainerDied","Data":"ac08c2a96822adae87e627ab8c6ab7ba89e03640acee4a44553d726b259966e3"} Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.054416 4932 scope.go:117] "RemoveContainer" containerID="321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.056511 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-kube-api-access-bvdwg" (OuterVolumeSpecName: "kube-api-access-bvdwg") pod "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" (UID: "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad"). InnerVolumeSpecName "kube-api-access-bvdwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.086318 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" (UID: "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.099514 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-config-data" (OuterVolumeSpecName: "config-data") pod "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" (UID: "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.111732 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" (UID: "8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.152573 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.152610 4932 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.152627 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.152638 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvdwg\" (UniqueName: \"kubernetes.io/projected/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-kube-api-access-bvdwg\") on 
node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.152648 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.195201 4932 scope.go:117] "RemoveContainer" containerID="bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.224400 4932 scope.go:117] "RemoveContainer" containerID="321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.234724 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7\": container with ID starting with 321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7 not found: ID does not exist" containerID="321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.234781 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7"} err="failed to get container status \"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7\": rpc error: code = NotFound desc = could not find container \"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7\": container with ID starting with 321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7 not found: ID does not exist" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.234806 4932 scope.go:117] "RemoveContainer" containerID="bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.237244 4932 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505\": container with ID starting with bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505 not found: ID does not exist" containerID="bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.237298 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505"} err="failed to get container status \"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505\": rpc error: code = NotFound desc = could not find container \"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505\": container with ID starting with bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505 not found: ID does not exist" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.237336 4932 scope.go:117] "RemoveContainer" containerID="321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.238047 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7"} err="failed to get container status \"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7\": rpc error: code = NotFound desc = could not find container \"321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7\": container with ID starting with 321959de1eb66366e304f9ff470d56b07fdc531922bb847063a4b50f221f9ed7 not found: ID does not exist" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.238089 4932 scope.go:117] "RemoveContainer" containerID="bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.238414 4932 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505"} err="failed to get container status \"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505\": rpc error: code = NotFound desc = could not find container \"bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505\": container with ID starting with bb8a55f6596cced04212e76b649c39bd0c96dba377b68386f77f77a085f48505 not found: ID does not exist" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.440514 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f756w" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.452605 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.462038 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490243 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.490805 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-log" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490832 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-log" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.490859 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerName="init" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490867 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerName="init" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.490885 4932 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3a07cf-a084-46a0-8ca2-830e0838d575" containerName="nova-cell1-conductor-db-sync" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490892 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3a07cf-a084-46a0-8ca2-830e0838d575" containerName="nova-cell1-conductor-db-sync" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.490908 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6473c7ac-af7d-4556-aa86-28aabc85694a" containerName="nova-manage" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490916 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6473c7ac-af7d-4556-aa86-28aabc85694a" containerName="nova-manage" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.490923 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-metadata" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490930 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-metadata" Feb 18 19:56:04 crc kubenswrapper[4932]: E0218 19:56:04.490952 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerName="dnsmasq-dns" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.490959 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerName="dnsmasq-dns" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.491214 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-metadata" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.491228 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebab9a68-9ab1-4d04-84ec-9f54b1e6e616" containerName="dnsmasq-dns" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 
19:56:04.491240 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d3a07cf-a084-46a0-8ca2-830e0838d575" containerName="nova-cell1-conductor-db-sync" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.491251 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" containerName="nova-metadata-log" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.491266 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6473c7ac-af7d-4556-aa86-28aabc85694a" containerName="nova-manage" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.494578 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.498991 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.509944 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.509963 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.574767 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-scripts\") pod \"5d3a07cf-a084-46a0-8ca2-830e0838d575\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575208 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-config-data\") pod \"5d3a07cf-a084-46a0-8ca2-830e0838d575\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575321 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzw4x\" (UniqueName: \"kubernetes.io/projected/5d3a07cf-a084-46a0-8ca2-830e0838d575-kube-api-access-bzw4x\") pod \"5d3a07cf-a084-46a0-8ca2-830e0838d575\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575364 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-combined-ca-bundle\") pod \"5d3a07cf-a084-46a0-8ca2-830e0838d575\" (UID: \"5d3a07cf-a084-46a0-8ca2-830e0838d575\") " Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575662 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575685 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-config-data\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575733 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60cae27-f16b-4874-800f-f94fc2ce849f-logs\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575754 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.575848 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frsfw\" (UniqueName: \"kubernetes.io/projected/f60cae27-f16b-4874-800f-f94fc2ce849f-kube-api-access-frsfw\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.584483 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d3a07cf-a084-46a0-8ca2-830e0838d575-kube-api-access-bzw4x" (OuterVolumeSpecName: "kube-api-access-bzw4x") pod "5d3a07cf-a084-46a0-8ca2-830e0838d575" (UID: "5d3a07cf-a084-46a0-8ca2-830e0838d575"). InnerVolumeSpecName "kube-api-access-bzw4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.590288 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-scripts" (OuterVolumeSpecName: "scripts") pod "5d3a07cf-a084-46a0-8ca2-830e0838d575" (UID: "5d3a07cf-a084-46a0-8ca2-830e0838d575"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.642613 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-config-data" (OuterVolumeSpecName: "config-data") pod "5d3a07cf-a084-46a0-8ca2-830e0838d575" (UID: "5d3a07cf-a084-46a0-8ca2-830e0838d575"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.643085 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d3a07cf-a084-46a0-8ca2-830e0838d575" (UID: "5d3a07cf-a084-46a0-8ca2-830e0838d575"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677205 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677274 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-config-data\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677325 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60cae27-f16b-4874-800f-f94fc2ce849f-logs\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677349 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 
19:56:04.677426 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frsfw\" (UniqueName: \"kubernetes.io/projected/f60cae27-f16b-4874-800f-f94fc2ce849f-kube-api-access-frsfw\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677517 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677530 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzw4x\" (UniqueName: \"kubernetes.io/projected/5d3a07cf-a084-46a0-8ca2-830e0838d575-kube-api-access-bzw4x\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677540 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.677549 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d3a07cf-a084-46a0-8ca2-830e0838d575-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.678204 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60cae27-f16b-4874-800f-f94fc2ce849f-logs\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.684002 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.684797 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-config-data\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.685585 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.696839 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frsfw\" (UniqueName: \"kubernetes.io/projected/f60cae27-f16b-4874-800f-f94fc2ce849f-kube-api-access-frsfw\") pod \"nova-metadata-0\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.826979 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:04 crc kubenswrapper[4932]: I0218 19:56:04.836571 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.076092 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f756w" event={"ID":"5d3a07cf-a084-46a0-8ca2-830e0838d575","Type":"ContainerDied","Data":"1f840d02307e281dd77d4c46053f73ca0e900cd59304c1e3d8da776e2b0a46e0"} Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.076389 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f840d02307e281dd77d4c46053f73ca0e900cd59304c1e3d8da776e2b0a46e0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.076248 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f756w" Feb 18 19:56:05 crc kubenswrapper[4932]: E0218 19:56:05.145120 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:56:05 crc kubenswrapper[4932]: E0218 19:56:05.147783 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:56:05 crc kubenswrapper[4932]: E0218 19:56:05.152972 4932 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:56:05 crc kubenswrapper[4932]: E0218 19:56:05.153062 4932 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3e97df52-5201-479d-aae1-ac0c36e3ea63" containerName="nova-scheduler-scheduler" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.166231 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.168311 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.173656 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.191098 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad" path="/var/lib/kubelet/pods/8f5dda4b-3fa1-40ec-a4c5-6be7e84345ad/volumes" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.191865 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.288580 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be0a105-6011-49ed-9dd4-878f392f4b65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.288811 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6be0a105-6011-49ed-9dd4-878f392f4b65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.288916 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg4n7\" (UniqueName: \"kubernetes.io/projected/6be0a105-6011-49ed-9dd4-878f392f4b65-kube-api-access-vg4n7\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.290160 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.390424 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be0a105-6011-49ed-9dd4-878f392f4b65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.390852 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6be0a105-6011-49ed-9dd4-878f392f4b65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.390923 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg4n7\" (UniqueName: \"kubernetes.io/projected/6be0a105-6011-49ed-9dd4-878f392f4b65-kube-api-access-vg4n7\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.398539 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6be0a105-6011-49ed-9dd4-878f392f4b65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.398747 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be0a105-6011-49ed-9dd4-878f392f4b65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.409334 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg4n7\" (UniqueName: \"kubernetes.io/projected/6be0a105-6011-49ed-9dd4-878f392f4b65-kube-api-access-vg4n7\") pod \"nova-cell1-conductor-0\" (UID: \"6be0a105-6011-49ed-9dd4-878f392f4b65\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.492014 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:05 crc kubenswrapper[4932]: I0218 19:56:05.951347 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:56:05 crc kubenswrapper[4932]: W0218 19:56:05.963799 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6be0a105_6011_49ed_9dd4_878f392f4b65.slice/crio-2d4391a2ea2da9d4cc6a357e6ff3d5e050955bb371de923ba587adc514c3b332 WatchSource:0}: Error finding container 2d4391a2ea2da9d4cc6a357e6ff3d5e050955bb371de923ba587adc514c3b332: Status 404 returned error can't find the container with id 2d4391a2ea2da9d4cc6a357e6ff3d5e050955bb371de923ba587adc514c3b332 Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.085278 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6be0a105-6011-49ed-9dd4-878f392f4b65","Type":"ContainerStarted","Data":"2d4391a2ea2da9d4cc6a357e6ff3d5e050955bb371de923ba587adc514c3b332"} Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.087155 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f60cae27-f16b-4874-800f-f94fc2ce849f","Type":"ContainerStarted","Data":"9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3"} Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.087209 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f60cae27-f16b-4874-800f-f94fc2ce849f","Type":"ContainerStarted","Data":"e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9"} Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.087219 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f60cae27-f16b-4874-800f-f94fc2ce849f","Type":"ContainerStarted","Data":"6cc5ccb6f5aea2a8c8a0f89d07f1d4998a02b0b51bdac9b6c2ee10d527679c5a"} Feb 18 19:56:06 crc 
kubenswrapper[4932]: I0218 19:56:06.113033 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.113014867 podStartE2EDuration="2.113014867s" podCreationTimestamp="2026-02-18 19:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:06.105577742 +0000 UTC m=+1329.687532607" watchObservedRunningTime="2026-02-18 19:56:06.113014867 +0000 UTC m=+1329.694969732" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.638779 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.717248 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a445a66f-1685-4542-89c3-012fef147a76-logs\") pod \"a445a66f-1685-4542-89c3-012fef147a76\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.717395 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsfs8\" (UniqueName: \"kubernetes.io/projected/a445a66f-1685-4542-89c3-012fef147a76-kube-api-access-xsfs8\") pod \"a445a66f-1685-4542-89c3-012fef147a76\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.717472 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-combined-ca-bundle\") pod \"a445a66f-1685-4542-89c3-012fef147a76\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.717519 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-config-data\") pod \"a445a66f-1685-4542-89c3-012fef147a76\" (UID: \"a445a66f-1685-4542-89c3-012fef147a76\") " Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.720647 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a445a66f-1685-4542-89c3-012fef147a76-logs" (OuterVolumeSpecName: "logs") pod "a445a66f-1685-4542-89c3-012fef147a76" (UID: "a445a66f-1685-4542-89c3-012fef147a76"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.722544 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a445a66f-1685-4542-89c3-012fef147a76-kube-api-access-xsfs8" (OuterVolumeSpecName: "kube-api-access-xsfs8") pod "a445a66f-1685-4542-89c3-012fef147a76" (UID: "a445a66f-1685-4542-89c3-012fef147a76"). InnerVolumeSpecName "kube-api-access-xsfs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.746680 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a445a66f-1685-4542-89c3-012fef147a76" (UID: "a445a66f-1685-4542-89c3-012fef147a76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.748236 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-config-data" (OuterVolumeSpecName: "config-data") pod "a445a66f-1685-4542-89c3-012fef147a76" (UID: "a445a66f-1685-4542-89c3-012fef147a76"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.820100 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsfs8\" (UniqueName: \"kubernetes.io/projected/a445a66f-1685-4542-89c3-012fef147a76-kube-api-access-xsfs8\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.820128 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.820137 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a445a66f-1685-4542-89c3-012fef147a76-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:06 crc kubenswrapper[4932]: I0218 19:56:06.820147 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a445a66f-1685-4542-89c3-012fef147a76-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.097534 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6be0a105-6011-49ed-9dd4-878f392f4b65","Type":"ContainerStarted","Data":"072752f253b8d8eed502eeeb97f13bf49a8cd71dffb3b05c441f747b46f0a40a"} Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.097610 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.099410 4932 generic.go:334] "Generic (PLEG): container finished" podID="a445a66f-1685-4542-89c3-012fef147a76" containerID="5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9" exitCode=0 Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.099477 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.099511 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a445a66f-1685-4542-89c3-012fef147a76","Type":"ContainerDied","Data":"5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9"} Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.099536 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a445a66f-1685-4542-89c3-012fef147a76","Type":"ContainerDied","Data":"db9aed346e952a35a560a0f801674a02f5a8f28572c2af0ca6ba733c50ec6e31"} Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.099555 4932 scope.go:117] "RemoveContainer" containerID="5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.120914 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.120897739 podStartE2EDuration="2.120897739s" podCreationTimestamp="2026-02-18 19:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:07.113800343 +0000 UTC m=+1330.695755188" watchObservedRunningTime="2026-02-18 19:56:07.120897739 +0000 UTC m=+1330.702852584" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.127117 4932 scope.go:117] "RemoveContainer" containerID="4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.235370 4932 scope.go:117] "RemoveContainer" containerID="5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9" Feb 18 19:56:07 crc kubenswrapper[4932]: E0218 19:56:07.235925 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9\": 
container with ID starting with 5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9 not found: ID does not exist" containerID="5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.235956 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9"} err="failed to get container status \"5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9\": rpc error: code = NotFound desc = could not find container \"5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9\": container with ID starting with 5f7527d01c487865e2fd7be5215fbd63ac3d155a793a5a3fe14168602cf387c9 not found: ID does not exist" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.235976 4932 scope.go:117] "RemoveContainer" containerID="4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4" Feb 18 19:56:07 crc kubenswrapper[4932]: E0218 19:56:07.236450 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4\": container with ID starting with 4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4 not found: ID does not exist" containerID="4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4" Feb 18 19:56:07 crc kubenswrapper[4932]: I0218 19:56:07.236471 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4"} err="failed to get container status \"4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4\": rpc error: code = NotFound desc = could not find container \"4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4\": container with ID starting with 
4d327303ff33eab8a6bbdcf1937931dc82eed58f45d98c26b2d9ee0e150ae6f4 not found: ID does not exist" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.119705 4932 generic.go:334] "Generic (PLEG): container finished" podID="3e97df52-5201-479d-aae1-ac0c36e3ea63" containerID="dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0" exitCode=0 Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.119771 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3e97df52-5201-479d-aae1-ac0c36e3ea63","Type":"ContainerDied","Data":"dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0"} Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.177659 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.254767 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jf96\" (UniqueName: \"kubernetes.io/projected/3e97df52-5201-479d-aae1-ac0c36e3ea63-kube-api-access-7jf96\") pod \"3e97df52-5201-479d-aae1-ac0c36e3ea63\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.254858 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-combined-ca-bundle\") pod \"3e97df52-5201-479d-aae1-ac0c36e3ea63\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.255006 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-config-data\") pod \"3e97df52-5201-479d-aae1-ac0c36e3ea63\" (UID: \"3e97df52-5201-479d-aae1-ac0c36e3ea63\") " Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.260365 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e97df52-5201-479d-aae1-ac0c36e3ea63-kube-api-access-7jf96" (OuterVolumeSpecName: "kube-api-access-7jf96") pod "3e97df52-5201-479d-aae1-ac0c36e3ea63" (UID: "3e97df52-5201-479d-aae1-ac0c36e3ea63"). InnerVolumeSpecName "kube-api-access-7jf96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.280550 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e97df52-5201-479d-aae1-ac0c36e3ea63" (UID: "3e97df52-5201-479d-aae1-ac0c36e3ea63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.290115 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-config-data" (OuterVolumeSpecName: "config-data") pod "3e97df52-5201-479d-aae1-ac0c36e3ea63" (UID: "3e97df52-5201-479d-aae1-ac0c36e3ea63"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.357310 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jf96\" (UniqueName: \"kubernetes.io/projected/3e97df52-5201-479d-aae1-ac0c36e3ea63-kube-api-access-7jf96\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.357351 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:08 crc kubenswrapper[4932]: I0218 19:56:08.357365 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e97df52-5201-479d-aae1-ac0c36e3ea63-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.133204 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3e97df52-5201-479d-aae1-ac0c36e3ea63","Type":"ContainerDied","Data":"5e68c76538cd952d6f6a3dd14aebb40e0d4b05858a3c9289e0a5ad892f731528"} Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.133651 4932 scope.go:117] "RemoveContainer" containerID="dedbe68a7a28a7582468bbdba015b74411131d55f6155a9cdd24cd82e7465bf0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.133491 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.205983 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.224055 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.233491 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:09 crc kubenswrapper[4932]: E0218 19:56:09.233963 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-api" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.233980 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-api" Feb 18 19:56:09 crc kubenswrapper[4932]: E0218 19:56:09.234008 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-log" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.234014 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-log" Feb 18 19:56:09 crc kubenswrapper[4932]: E0218 19:56:09.234027 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e97df52-5201-479d-aae1-ac0c36e3ea63" containerName="nova-scheduler-scheduler" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.234035 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e97df52-5201-479d-aae1-ac0c36e3ea63" containerName="nova-scheduler-scheduler" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.234248 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-log" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.234266 4932 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a445a66f-1685-4542-89c3-012fef147a76" containerName="nova-api-api" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.234279 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e97df52-5201-479d-aae1-ac0c36e3ea63" containerName="nova-scheduler-scheduler" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.234949 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.237371 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.245482 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.276094 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-config-data\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.276577 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chq2r\" (UniqueName: \"kubernetes.io/projected/73e788d9-865f-453f-bdca-1de3b96af3e7-kube-api-access-chq2r\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.276868 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:09 crc 
kubenswrapper[4932]: I0218 19:56:09.378364 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chq2r\" (UniqueName: \"kubernetes.io/projected/73e788d9-865f-453f-bdca-1de3b96af3e7-kube-api-access-chq2r\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.378458 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.378505 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-config-data\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.387781 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.389759 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-config-data\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.407135 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chq2r\" (UniqueName: 
\"kubernetes.io/projected/73e788d9-865f-453f-bdca-1de3b96af3e7-kube-api-access-chq2r\") pod \"nova-scheduler-0\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.556948 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.677902 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.684126 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bf2c7a4b-b600-48af-8081-cbb3c729223f" containerName="kube-state-metrics" containerID="cri-o://705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648" gracePeriod=30 Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.827567 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.827610 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:56:09 crc kubenswrapper[4932]: I0218 19:56:09.907436 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:09 crc kubenswrapper[4932]: W0218 19:56:09.914452 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73e788d9_865f_453f_bdca_1de3b96af3e7.slice/crio-548cdd4f66aaff2ab06550adbc3a37a85c5a81145cf7ceddc60485f6dfd2fbce WatchSource:0}: Error finding container 548cdd4f66aaff2ab06550adbc3a37a85c5a81145cf7ceddc60485f6dfd2fbce: Status 404 returned error can't find the container with id 548cdd4f66aaff2ab06550adbc3a37a85c5a81145cf7ceddc60485f6dfd2fbce Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.093500 4932 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.151930 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"73e788d9-865f-453f-bdca-1de3b96af3e7","Type":"ContainerStarted","Data":"e9c9b5cc67858791a5222fc4a66b9a665582e43139a0ec213ce7bb500dba90ac"} Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.151979 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"73e788d9-865f-453f-bdca-1de3b96af3e7","Type":"ContainerStarted","Data":"548cdd4f66aaff2ab06550adbc3a37a85c5a81145cf7ceddc60485f6dfd2fbce"} Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.156526 4932 generic.go:334] "Generic (PLEG): container finished" podID="bf2c7a4b-b600-48af-8081-cbb3c729223f" containerID="705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648" exitCode=2 Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.156558 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.156567 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bf2c7a4b-b600-48af-8081-cbb3c729223f","Type":"ContainerDied","Data":"705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648"} Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.156593 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bf2c7a4b-b600-48af-8081-cbb3c729223f","Type":"ContainerDied","Data":"e47d3e77ce83e6731fdca0338e3764007d631b786a20a291b2d3ac30da1a2204"} Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.156614 4932 scope.go:117] "RemoveContainer" containerID="705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.195963 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlzzn\" (UniqueName: \"kubernetes.io/projected/bf2c7a4b-b600-48af-8081-cbb3c729223f-kube-api-access-hlzzn\") pod \"bf2c7a4b-b600-48af-8081-cbb3c729223f\" (UID: \"bf2c7a4b-b600-48af-8081-cbb3c729223f\") " Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.202921 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2c7a4b-b600-48af-8081-cbb3c729223f-kube-api-access-hlzzn" (OuterVolumeSpecName: "kube-api-access-hlzzn") pod "bf2c7a4b-b600-48af-8081-cbb3c729223f" (UID: "bf2c7a4b-b600-48af-8081-cbb3c729223f"). InnerVolumeSpecName "kube-api-access-hlzzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.208685 4932 scope.go:117] "RemoveContainer" containerID="705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648" Feb 18 19:56:10 crc kubenswrapper[4932]: E0218 19:56:10.211374 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648\": container with ID starting with 705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648 not found: ID does not exist" containerID="705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.211437 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648"} err="failed to get container status \"705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648\": rpc error: code = NotFound desc = could not find container \"705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648\": container with ID starting with 705b36739fa359b1a1790afbeb0506bcecaed1656227244dd9b3dc4748101648 not found: ID does not exist" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.304167 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlzzn\" (UniqueName: \"kubernetes.io/projected/bf2c7a4b-b600-48af-8081-cbb3c729223f-kube-api-access-hlzzn\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.482988 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.482946807 podStartE2EDuration="1.482946807s" podCreationTimestamp="2026-02-18 19:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 
19:56:10.174380956 +0000 UTC m=+1333.756335821" watchObservedRunningTime="2026-02-18 19:56:10.482946807 +0000 UTC m=+1334.064901642" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.491551 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.502371 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.513947 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:56:10 crc kubenswrapper[4932]: E0218 19:56:10.514386 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2c7a4b-b600-48af-8081-cbb3c729223f" containerName="kube-state-metrics" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.514403 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2c7a4b-b600-48af-8081-cbb3c729223f" containerName="kube-state-metrics" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.514761 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2c7a4b-b600-48af-8081-cbb3c729223f" containerName="kube-state-metrics" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.515418 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.521160 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.521299 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.534949 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.611953 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcf9t\" (UniqueName: \"kubernetes.io/projected/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-api-access-rcf9t\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.612029 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.612070 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.612298 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.714137 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.714321 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcf9t\" (UniqueName: \"kubernetes.io/projected/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-api-access-rcf9t\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.714387 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.714433 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.719390 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.727661 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.728264 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469132ad-f7a9-4208-8f20-42f72f6c6436-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.736364 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcf9t\" (UniqueName: \"kubernetes.io/projected/469132ad-f7a9-4208-8f20-42f72f6c6436-kube-api-access-rcf9t\") pod \"kube-state-metrics-0\" (UID: \"469132ad-f7a9-4208-8f20-42f72f6c6436\") " pod="openstack/kube-state-metrics-0" Feb 18 19:56:10 crc kubenswrapper[4932]: I0218 19:56:10.833393 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.190921 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e97df52-5201-479d-aae1-ac0c36e3ea63" path="/var/lib/kubelet/pods/3e97df52-5201-479d-aae1-ac0c36e3ea63/volumes" Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.191767 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf2c7a4b-b600-48af-8081-cbb3c729223f" path="/var/lib/kubelet/pods/bf2c7a4b-b600-48af-8081-cbb3c729223f/volumes" Feb 18 19:56:11 crc kubenswrapper[4932]: W0218 19:56:11.345198 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod469132ad_f7a9_4208_8f20_42f72f6c6436.slice/crio-2369a1f8d602ae9891dcc8de26b0ef5a4e6faeeefe853a76f5cff2995c77e90b WatchSource:0}: Error finding container 2369a1f8d602ae9891dcc8de26b0ef5a4e6faeeefe853a76f5cff2995c77e90b: Status 404 returned error can't find the container with id 2369a1f8d602ae9891dcc8de26b0ef5a4e6faeeefe853a76f5cff2995c77e90b Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.347805 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.669857 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.670192 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-central-agent" containerID="cri-o://a42357a04eca2427447a527f9b884286ac30d97b8bf59de7d2cd9869618e566a" gracePeriod=30 Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.670375 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" 
containerName="ceilometer-notification-agent" containerID="cri-o://4f612b79e40b95e6fef0e37a0198be25f5c486cd3ca03eaa4c43b2840baeb770" gracePeriod=30 Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.670577 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="sg-core" containerID="cri-o://2697d0543e7fe8649877a6210966590083b7e47b807f2346f64c28d10d502f59" gracePeriod=30 Feb 18 19:56:11 crc kubenswrapper[4932]: I0218 19:56:11.670665 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="proxy-httpd" containerID="cri-o://235fb721bf81fe59350072741d94ffeb2cb2dcf4dda7a36192f0baba9a50695d" gracePeriod=30 Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.192380 4932 generic.go:334] "Generic (PLEG): container finished" podID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerID="235fb721bf81fe59350072741d94ffeb2cb2dcf4dda7a36192f0baba9a50695d" exitCode=0 Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.192849 4932 generic.go:334] "Generic (PLEG): container finished" podID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerID="2697d0543e7fe8649877a6210966590083b7e47b807f2346f64c28d10d502f59" exitCode=2 Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.192862 4932 generic.go:334] "Generic (PLEG): container finished" podID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerID="a42357a04eca2427447a527f9b884286ac30d97b8bf59de7d2cd9869618e566a" exitCode=0 Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.192909 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerDied","Data":"235fb721bf81fe59350072741d94ffeb2cb2dcf4dda7a36192f0baba9a50695d"} Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.192938 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerDied","Data":"2697d0543e7fe8649877a6210966590083b7e47b807f2346f64c28d10d502f59"} Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.192951 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerDied","Data":"a42357a04eca2427447a527f9b884286ac30d97b8bf59de7d2cd9869618e566a"} Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.195529 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"469132ad-f7a9-4208-8f20-42f72f6c6436","Type":"ContainerStarted","Data":"8b3baf41dbbeb5f78bd2df0e9a6349a73b0cf1bfed4ce521e634c36c19ea7208"} Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.195562 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"469132ad-f7a9-4208-8f20-42f72f6c6436","Type":"ContainerStarted","Data":"2369a1f8d602ae9891dcc8de26b0ef5a4e6faeeefe853a76f5cff2995c77e90b"} Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.196768 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 19:56:12 crc kubenswrapper[4932]: I0218 19:56:12.222582 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.714197388 podStartE2EDuration="2.222557334s" podCreationTimestamp="2026-02-18 19:56:10 +0000 UTC" firstStartedPulling="2026-02-18 19:56:11.348794367 +0000 UTC m=+1334.930749212" lastFinishedPulling="2026-02-18 19:56:11.857154303 +0000 UTC m=+1335.439109158" observedRunningTime="2026-02-18 19:56:12.211049708 +0000 UTC m=+1335.793004573" watchObservedRunningTime="2026-02-18 19:56:12.222557334 +0000 UTC m=+1335.804512179" Feb 18 19:56:14 crc kubenswrapper[4932]: I0218 19:56:14.557299 4932 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:56:14 crc kubenswrapper[4932]: I0218 19:56:14.828165 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:56:14 crc kubenswrapper[4932]: I0218 19:56:14.828236 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:56:15 crc kubenswrapper[4932]: I0218 19:56:15.532352 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 18 19:56:15 crc kubenswrapper[4932]: I0218 19:56:15.840406 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:56:15 crc kubenswrapper[4932]: I0218 19:56:15.840432 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.271847 4932 generic.go:334] "Generic (PLEG): container finished" podID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerID="4f612b79e40b95e6fef0e37a0198be25f5c486cd3ca03eaa4c43b2840baeb770" exitCode=0 Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.272505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerDied","Data":"4f612b79e40b95e6fef0e37a0198be25f5c486cd3ca03eaa4c43b2840baeb770"} Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.500029 4932 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.603818 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-scripts\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605011 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-sg-core-conf-yaml\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605159 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5x86\" (UniqueName: \"kubernetes.io/projected/f22c0acb-8789-4ba1-8e45-8e456165db99-kube-api-access-k5x86\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605307 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-combined-ca-bundle\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605402 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-config-data\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605506 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-log-httpd\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605660 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-run-httpd\") pod \"f22c0acb-8789-4ba1-8e45-8e456165db99\" (UID: \"f22c0acb-8789-4ba1-8e45-8e456165db99\") " Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.605944 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.606032 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.606554 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.606656 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f22c0acb-8789-4ba1-8e45-8e456165db99-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.620328 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-scripts" (OuterVolumeSpecName: "scripts") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.620436 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22c0acb-8789-4ba1-8e45-8e456165db99-kube-api-access-k5x86" (OuterVolumeSpecName: "kube-api-access-k5x86") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "kube-api-access-k5x86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.645379 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.701147 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.706342 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-config-data" (OuterVolumeSpecName: "config-data") pod "f22c0acb-8789-4ba1-8e45-8e456165db99" (UID: "f22c0acb-8789-4ba1-8e45-8e456165db99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.709117 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.709452 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.709531 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:18 crc kubenswrapper[4932]: I0218 19:56:18.709619 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f22c0acb-8789-4ba1-8e45-8e456165db99-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:18 
crc kubenswrapper[4932]: I0218 19:56:18.709691 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5x86\" (UniqueName: \"kubernetes.io/projected/f22c0acb-8789-4ba1-8e45-8e456165db99-kube-api-access-k5x86\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.284914 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f22c0acb-8789-4ba1-8e45-8e456165db99","Type":"ContainerDied","Data":"a620663d217c47ddd4628558591f9269acb00cf7b394dcbb5dec8251391d19e8"} Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.285319 4932 scope.go:117] "RemoveContainer" containerID="235fb721bf81fe59350072741d94ffeb2cb2dcf4dda7a36192f0baba9a50695d" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.285001 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.311396 4932 scope.go:117] "RemoveContainer" containerID="2697d0543e7fe8649877a6210966590083b7e47b807f2346f64c28d10d502f59" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.321365 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.343920 4932 scope.go:117] "RemoveContainer" containerID="4f612b79e40b95e6fef0e37a0198be25f5c486cd3ca03eaa4c43b2840baeb770" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.349037 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.362341 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:56:19 crc kubenswrapper[4932]: E0218 19:56:19.362925 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="sg-core" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.362947 4932 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="sg-core" Feb 18 19:56:19 crc kubenswrapper[4932]: E0218 19:56:19.362964 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-central-agent" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.362973 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-central-agent" Feb 18 19:56:19 crc kubenswrapper[4932]: E0218 19:56:19.362997 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="proxy-httpd" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.363005 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="proxy-httpd" Feb 18 19:56:19 crc kubenswrapper[4932]: E0218 19:56:19.363024 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-notification-agent" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.363032 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-notification-agent" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.363303 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="proxy-httpd" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.363321 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-notification-agent" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.363348 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="sg-core" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 
19:56:19.363361 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" containerName="ceilometer-central-agent" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.365649 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.368866 4932 scope.go:117] "RemoveContainer" containerID="a42357a04eca2427447a527f9b884286ac30d97b8bf59de7d2cd9869618e566a" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.368876 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.369070 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.371741 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.374036 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423015 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-run-httpd\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423086 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423115 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-log-httpd\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423156 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423221 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bvgl\" (UniqueName: \"kubernetes.io/projected/07d7be76-f5d6-4280-8009-01c1db25ee6e-kube-api-access-2bvgl\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423263 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-config-data\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423309 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-scripts\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.423408 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525432 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525483 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-run-httpd\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525514 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525531 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-log-httpd\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525559 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc 
kubenswrapper[4932]: I0218 19:56:19.525594 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bvgl\" (UniqueName: \"kubernetes.io/projected/07d7be76-f5d6-4280-8009-01c1db25ee6e-kube-api-access-2bvgl\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525622 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-config-data\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.525654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-scripts\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.526055 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-run-httpd\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.526380 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-log-httpd\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.530394 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-config-data\") pod \"ceilometer-0\" (UID: 
\"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.530527 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-scripts\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.530549 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.530644 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.530935 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.549561 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bvgl\" (UniqueName: \"kubernetes.io/projected/07d7be76-f5d6-4280-8009-01c1db25ee6e-kube-api-access-2bvgl\") pod \"ceilometer-0\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " pod="openstack/ceilometer-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.557192 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.592563 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:56:19 crc kubenswrapper[4932]: I0218 19:56:19.693802 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:56:20 crc kubenswrapper[4932]: I0218 19:56:20.184633 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:56:20 crc kubenswrapper[4932]: I0218 19:56:20.297334 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerStarted","Data":"bed094a29d5cc735d8b58329a9d581210c267db550c3be7eeb9923193dc084eb"} Feb 18 19:56:20 crc kubenswrapper[4932]: I0218 19:56:20.327227 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:56:20 crc kubenswrapper[4932]: I0218 19:56:20.845101 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 19:56:21 crc kubenswrapper[4932]: I0218 19:56:21.196096 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f22c0acb-8789-4ba1-8e45-8e456165db99" path="/var/lib/kubelet/pods/f22c0acb-8789-4ba1-8e45-8e456165db99/volumes" Feb 18 19:56:21 crc kubenswrapper[4932]: I0218 19:56:21.311813 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerStarted","Data":"beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504"} Feb 18 19:56:21 crc kubenswrapper[4932]: I0218 19:56:21.311868 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerStarted","Data":"d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea"} Feb 18 19:56:22 crc kubenswrapper[4932]: I0218 19:56:22.322505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerStarted","Data":"89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c"} Feb 18 19:56:24 crc kubenswrapper[4932]: I0218 19:56:24.356992 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerStarted","Data":"c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39"} Feb 18 19:56:24 crc kubenswrapper[4932]: I0218 19:56:24.358681 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:56:24 crc kubenswrapper[4932]: I0218 19:56:24.386419 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.280531854 podStartE2EDuration="5.386398629s" podCreationTimestamp="2026-02-18 19:56:19 +0000 UTC" firstStartedPulling="2026-02-18 19:56:20.198926653 +0000 UTC m=+1343.780881508" lastFinishedPulling="2026-02-18 19:56:23.304793438 +0000 UTC m=+1346.886748283" observedRunningTime="2026-02-18 19:56:24.381266632 +0000 UTC m=+1347.963221487" watchObservedRunningTime="2026-02-18 19:56:24.386398629 +0000 UTC m=+1347.968353474" Feb 18 19:56:24 crc kubenswrapper[4932]: I0218 19:56:24.836657 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:56:24 crc kubenswrapper[4932]: I0218 19:56:24.837257 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:56:24 crc kubenswrapper[4932]: I0218 19:56:24.842891 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Feb 18 19:56:25 crc kubenswrapper[4932]: I0218 19:56:25.376046 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.380408 4932 generic.go:334] "Generic (PLEG): container finished" podID="59185a09-938b-47ba-99ed-1b81362038e0" containerID="f581b8c9ce44e42d3ff03f376a0f68bc8c6d3dd65d58f6d7b80411f3452dd5a6" exitCode=137 Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.380539 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"59185a09-938b-47ba-99ed-1b81362038e0","Type":"ContainerDied","Data":"f581b8c9ce44e42d3ff03f376a0f68bc8c6d3dd65d58f6d7b80411f3452dd5a6"} Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.383411 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"59185a09-938b-47ba-99ed-1b81362038e0","Type":"ContainerDied","Data":"372bb3654ce51919b696e3d9eb989784a6ab397c40b87b62e7e2d42b5443d7b8"} Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.383444 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="372bb3654ce51919b696e3d9eb989784a6ab397c40b87b62e7e2d42b5443d7b8" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.461608 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.590245 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-combined-ca-bundle\") pod \"59185a09-938b-47ba-99ed-1b81362038e0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.590453 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb946\" (UniqueName: \"kubernetes.io/projected/59185a09-938b-47ba-99ed-1b81362038e0-kube-api-access-cb946\") pod \"59185a09-938b-47ba-99ed-1b81362038e0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.590687 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-config-data\") pod \"59185a09-938b-47ba-99ed-1b81362038e0\" (UID: \"59185a09-938b-47ba-99ed-1b81362038e0\") " Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.596575 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59185a09-938b-47ba-99ed-1b81362038e0-kube-api-access-cb946" (OuterVolumeSpecName: "kube-api-access-cb946") pod "59185a09-938b-47ba-99ed-1b81362038e0" (UID: "59185a09-938b-47ba-99ed-1b81362038e0"). InnerVolumeSpecName "kube-api-access-cb946". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.617918 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59185a09-938b-47ba-99ed-1b81362038e0" (UID: "59185a09-938b-47ba-99ed-1b81362038e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.621061 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-config-data" (OuterVolumeSpecName: "config-data") pod "59185a09-938b-47ba-99ed-1b81362038e0" (UID: "59185a09-938b-47ba-99ed-1b81362038e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.694189 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.694227 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb946\" (UniqueName: \"kubernetes.io/projected/59185a09-938b-47ba-99ed-1b81362038e0-kube-api-access-cb946\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:26 crc kubenswrapper[4932]: I0218 19:56:26.694241 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59185a09-938b-47ba-99ed-1b81362038e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.392335 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.423159 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.434920 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.445726 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:56:27 crc kubenswrapper[4932]: E0218 19:56:27.446527 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59185a09-938b-47ba-99ed-1b81362038e0" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.446562 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="59185a09-938b-47ba-99ed-1b81362038e0" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.446891 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="59185a09-938b-47ba-99ed-1b81362038e0" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.448102 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.451306 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.452978 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.455316 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.463395 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.518139 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6v4z\" (UniqueName: \"kubernetes.io/projected/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-kube-api-access-z6v4z\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.518282 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.518401 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 
crc kubenswrapper[4932]: I0218 19:56:27.518690 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.518737 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.606328 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.606401 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.606457 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.607676 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"691ac26b2e0eb4976dab73dc438ad2163dc0ad731157e8dbe0e2c19541cba856"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.607768 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://691ac26b2e0eb4976dab73dc438ad2163dc0ad731157e8dbe0e2c19541cba856" gracePeriod=600 Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.621109 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6v4z\" (UniqueName: \"kubernetes.io/projected/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-kube-api-access-z6v4z\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.621200 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.621287 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.621405 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.621434 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.627278 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.627299 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.629790 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.630713 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.646497 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6v4z\" (UniqueName: \"kubernetes.io/projected/e0af656d-88c5-4f09-bb21-d7b1d6f85ec7-kube-api-access-z6v4z\") pod \"nova-cell1-novncproxy-0\" (UID: \"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:27 crc kubenswrapper[4932]: I0218 19:56:27.768629 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:28 crc kubenswrapper[4932]: I0218 19:56:28.293918 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:56:28 crc kubenswrapper[4932]: W0218 19:56:28.301023 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0af656d_88c5_4f09_bb21_d7b1d6f85ec7.slice/crio-276393bcf286239081abcb5960133251e62c31ff6a18510de26e14a339a095e8 WatchSource:0}: Error finding container 276393bcf286239081abcb5960133251e62c31ff6a18510de26e14a339a095e8: Status 404 returned error can't find the container with id 276393bcf286239081abcb5960133251e62c31ff6a18510de26e14a339a095e8 Feb 18 19:56:28 crc kubenswrapper[4932]: I0218 19:56:28.407902 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7","Type":"ContainerStarted","Data":"276393bcf286239081abcb5960133251e62c31ff6a18510de26e14a339a095e8"} Feb 18 19:56:28 crc kubenswrapper[4932]: I0218 19:56:28.414049 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="691ac26b2e0eb4976dab73dc438ad2163dc0ad731157e8dbe0e2c19541cba856" exitCode=0 Feb 18 19:56:28 crc kubenswrapper[4932]: I0218 
19:56:28.414104 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"691ac26b2e0eb4976dab73dc438ad2163dc0ad731157e8dbe0e2c19541cba856"} Feb 18 19:56:28 crc kubenswrapper[4932]: I0218 19:56:28.414137 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"} Feb 18 19:56:28 crc kubenswrapper[4932]: I0218 19:56:28.414157 4932 scope.go:117] "RemoveContainer" containerID="435f6d4431c63fe1b1d0a709b03d86681659a5d37fb618d6ab36ba1010fce349" Feb 18 19:56:29 crc kubenswrapper[4932]: I0218 19:56:29.214067 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59185a09-938b-47ba-99ed-1b81362038e0" path="/var/lib/kubelet/pods/59185a09-938b-47ba-99ed-1b81362038e0/volumes" Feb 18 19:56:29 crc kubenswrapper[4932]: I0218 19:56:29.429710 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e0af656d-88c5-4f09-bb21-d7b1d6f85ec7","Type":"ContainerStarted","Data":"675ee2536c11f30aaee26a576c5150d248f738a44e5ecfaa73a2f894e21b79b7"} Feb 18 19:56:29 crc kubenswrapper[4932]: I0218 19:56:29.457570 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.457554697 podStartE2EDuration="2.457554697s" podCreationTimestamp="2026-02-18 19:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:29.456124152 +0000 UTC m=+1353.038079007" watchObservedRunningTime="2026-02-18 19:56:29.457554697 +0000 UTC m=+1353.039509542" Feb 18 19:56:32 crc kubenswrapper[4932]: I0218 19:56:32.769210 4932 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.161501 4932 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poda445a66f-1685-4542-89c3-012fef147a76"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda445a66f-1685-4542-89c3-012fef147a76] : Timed out while waiting for systemd to remove kubepods-besteffort-poda445a66f_1685_4542_89c3_012fef147a76.slice" Feb 18 19:56:37 crc kubenswrapper[4932]: E0218 19:56:37.162014 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort poda445a66f-1685-4542-89c3-012fef147a76] : unable to destroy cgroup paths for cgroup [kubepods besteffort poda445a66f-1685-4542-89c3-012fef147a76] : Timed out while waiting for systemd to remove kubepods-besteffort-poda445a66f_1685_4542_89c3_012fef147a76.slice" pod="openstack/nova-api-0" podUID="a445a66f-1685-4542-89c3-012fef147a76" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.509855 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.542091 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.554655 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.568245 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.570485 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.579895 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.609789 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.626064 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-config-data\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.626126 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.626163 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ecfead8-d016-48b2-bf3f-f3583a73b86c-logs\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.626468 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw78k\" (UniqueName: \"kubernetes.io/projected/7ecfead8-d016-48b2-bf3f-f3583a73b86c-kube-api-access-bw78k\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.728604 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bw78k\" (UniqueName: \"kubernetes.io/projected/7ecfead8-d016-48b2-bf3f-f3583a73b86c-kube-api-access-bw78k\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.728825 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-config-data\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.728861 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.728904 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ecfead8-d016-48b2-bf3f-f3583a73b86c-logs\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.729565 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ecfead8-d016-48b2-bf3f-f3583a73b86c-logs\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.737340 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-config-data\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.749056 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.755770 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw78k\" (UniqueName: \"kubernetes.io/projected/7ecfead8-d016-48b2-bf3f-f3583a73b86c-kube-api-access-bw78k\") pod \"nova-api-0\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " pod="openstack/nova-api-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.769586 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.799919 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:37 crc kubenswrapper[4932]: I0218 19:56:37.919263 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.383066 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:38 crc kubenswrapper[4932]: W0218 19:56:38.383698 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ecfead8_d016_48b2_bf3f_f3583a73b86c.slice/crio-19ffa197778275bcd9f48f91d00a44722c32c65bb9ad6a8add9bdfb08abe1a4f WatchSource:0}: Error finding container 19ffa197778275bcd9f48f91d00a44722c32c65bb9ad6a8add9bdfb08abe1a4f: Status 404 returned error can't find the container with id 19ffa197778275bcd9f48f91d00a44722c32c65bb9ad6a8add9bdfb08abe1a4f Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.532794 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ecfead8-d016-48b2-bf3f-f3583a73b86c","Type":"ContainerStarted","Data":"19ffa197778275bcd9f48f91d00a44722c32c65bb9ad6a8add9bdfb08abe1a4f"} Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.569299 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.799718 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-kf5w6"] Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.801232 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.804872 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.805109 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.815193 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-kf5w6"] Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.858492 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-config-data\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.858685 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmcwn\" (UniqueName: \"kubernetes.io/projected/738744b3-86e1-432c-8380-0d428a2e8263-kube-api-access-cmcwn\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.858787 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-scripts\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.858911 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.960808 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-config-data\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.960940 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmcwn\" (UniqueName: \"kubernetes.io/projected/738744b3-86e1-432c-8380-0d428a2e8263-kube-api-access-cmcwn\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.960993 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-scripts\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.961049 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.965694 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-config-data\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.966347 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.970903 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-scripts\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:38 crc kubenswrapper[4932]: I0218 19:56:38.981819 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmcwn\" (UniqueName: \"kubernetes.io/projected/738744b3-86e1-432c-8380-0d428a2e8263-kube-api-access-cmcwn\") pod \"nova-cell1-cell-mapping-kf5w6\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:39 crc kubenswrapper[4932]: I0218 19:56:39.125652 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:39 crc kubenswrapper[4932]: I0218 19:56:39.192609 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a445a66f-1685-4542-89c3-012fef147a76" path="/var/lib/kubelet/pods/a445a66f-1685-4542-89c3-012fef147a76/volumes" Feb 18 19:56:39 crc kubenswrapper[4932]: I0218 19:56:39.540655 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ecfead8-d016-48b2-bf3f-f3583a73b86c","Type":"ContainerStarted","Data":"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03"} Feb 18 19:56:39 crc kubenswrapper[4932]: I0218 19:56:39.540925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ecfead8-d016-48b2-bf3f-f3583a73b86c","Type":"ContainerStarted","Data":"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed"} Feb 18 19:56:39 crc kubenswrapper[4932]: I0218 19:56:39.597311 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.597212668 podStartE2EDuration="2.597212668s" podCreationTimestamp="2026-02-18 19:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:39.595599778 +0000 UTC m=+1363.177554633" watchObservedRunningTime="2026-02-18 19:56:39.597212668 +0000 UTC m=+1363.179167523" Feb 18 19:56:39 crc kubenswrapper[4932]: I0218 19:56:39.625003 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-kf5w6"] Feb 18 19:56:40 crc kubenswrapper[4932]: I0218 19:56:40.549990 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kf5w6" event={"ID":"738744b3-86e1-432c-8380-0d428a2e8263","Type":"ContainerStarted","Data":"e000a4553afc7ad7dbb58680bc4724da86a258372aee2e0c10f7e863173c5a10"} Feb 18 19:56:40 crc kubenswrapper[4932]: I0218 
19:56:40.550533 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kf5w6" event={"ID":"738744b3-86e1-432c-8380-0d428a2e8263","Type":"ContainerStarted","Data":"145cdbac6aac3a6ef91498bcc0059c99e75808662cbf2f4f042a83ee54006140"} Feb 18 19:56:40 crc kubenswrapper[4932]: I0218 19:56:40.569820 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-kf5w6" podStartSLOduration=2.5697963 podStartE2EDuration="2.5697963s" podCreationTimestamp="2026-02-18 19:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:40.56341705 +0000 UTC m=+1364.145371905" watchObservedRunningTime="2026-02-18 19:56:40.5697963 +0000 UTC m=+1364.151751155" Feb 18 19:56:45 crc kubenswrapper[4932]: I0218 19:56:45.616756 4932 generic.go:334] "Generic (PLEG): container finished" podID="738744b3-86e1-432c-8380-0d428a2e8263" containerID="e000a4553afc7ad7dbb58680bc4724da86a258372aee2e0c10f7e863173c5a10" exitCode=0 Feb 18 19:56:45 crc kubenswrapper[4932]: I0218 19:56:45.616887 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kf5w6" event={"ID":"738744b3-86e1-432c-8380-0d428a2e8263","Type":"ContainerDied","Data":"e000a4553afc7ad7dbb58680bc4724da86a258372aee2e0c10f7e863173c5a10"} Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.029281 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.153330 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-combined-ca-bundle\") pod \"738744b3-86e1-432c-8380-0d428a2e8263\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.153486 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-scripts\") pod \"738744b3-86e1-432c-8380-0d428a2e8263\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.153601 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-config-data\") pod \"738744b3-86e1-432c-8380-0d428a2e8263\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.153698 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmcwn\" (UniqueName: \"kubernetes.io/projected/738744b3-86e1-432c-8380-0d428a2e8263-kube-api-access-cmcwn\") pod \"738744b3-86e1-432c-8380-0d428a2e8263\" (UID: \"738744b3-86e1-432c-8380-0d428a2e8263\") " Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.160342 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-scripts" (OuterVolumeSpecName: "scripts") pod "738744b3-86e1-432c-8380-0d428a2e8263" (UID: "738744b3-86e1-432c-8380-0d428a2e8263"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.163497 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/738744b3-86e1-432c-8380-0d428a2e8263-kube-api-access-cmcwn" (OuterVolumeSpecName: "kube-api-access-cmcwn") pod "738744b3-86e1-432c-8380-0d428a2e8263" (UID: "738744b3-86e1-432c-8380-0d428a2e8263"). InnerVolumeSpecName "kube-api-access-cmcwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.190442 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-config-data" (OuterVolumeSpecName: "config-data") pod "738744b3-86e1-432c-8380-0d428a2e8263" (UID: "738744b3-86e1-432c-8380-0d428a2e8263"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.194766 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "738744b3-86e1-432c-8380-0d428a2e8263" (UID: "738744b3-86e1-432c-8380-0d428a2e8263"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.255729 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmcwn\" (UniqueName: \"kubernetes.io/projected/738744b3-86e1-432c-8380-0d428a2e8263-kube-api-access-cmcwn\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.255759 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.255768 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.255777 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738744b3-86e1-432c-8380-0d428a2e8263-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.637549 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kf5w6" event={"ID":"738744b3-86e1-432c-8380-0d428a2e8263","Type":"ContainerDied","Data":"145cdbac6aac3a6ef91498bcc0059c99e75808662cbf2f4f042a83ee54006140"} Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.637591 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="145cdbac6aac3a6ef91498bcc0059c99e75808662cbf2f4f042a83ee54006140" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.637608 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kf5w6" Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.828987 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.829273 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-log" containerID="cri-o://7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed" gracePeriod=30 Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.829374 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-api" containerID="cri-o://2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03" gracePeriod=30 Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.843958 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.844247 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="73e788d9-865f-453f-bdca-1de3b96af3e7" containerName="nova-scheduler-scheduler" containerID="cri-o://e9c9b5cc67858791a5222fc4a66b9a665582e43139a0ec213ce7bb500dba90ac" gracePeriod=30 Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.863621 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.864187 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-log" containerID="cri-o://e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9" gracePeriod=30 Feb 18 19:56:47 crc kubenswrapper[4932]: I0218 19:56:47.864309 4932 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-metadata" containerID="cri-o://9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3" gracePeriod=30 Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.493805 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.583977 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-combined-ca-bundle\") pod \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.584055 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw78k\" (UniqueName: \"kubernetes.io/projected/7ecfead8-d016-48b2-bf3f-f3583a73b86c-kube-api-access-bw78k\") pod \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.584218 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ecfead8-d016-48b2-bf3f-f3583a73b86c-logs\") pod \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.584306 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-config-data\") pod \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\" (UID: \"7ecfead8-d016-48b2-bf3f-f3583a73b86c\") " Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.585067 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/7ecfead8-d016-48b2-bf3f-f3583a73b86c-logs" (OuterVolumeSpecName: "logs") pod "7ecfead8-d016-48b2-bf3f-f3583a73b86c" (UID: "7ecfead8-d016-48b2-bf3f-f3583a73b86c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.589092 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ecfead8-d016-48b2-bf3f-f3583a73b86c-kube-api-access-bw78k" (OuterVolumeSpecName: "kube-api-access-bw78k") pod "7ecfead8-d016-48b2-bf3f-f3583a73b86c" (UID: "7ecfead8-d016-48b2-bf3f-f3583a73b86c"). InnerVolumeSpecName "kube-api-access-bw78k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.613887 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-config-data" (OuterVolumeSpecName: "config-data") pod "7ecfead8-d016-48b2-bf3f-f3583a73b86c" (UID: "7ecfead8-d016-48b2-bf3f-f3583a73b86c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.620643 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ecfead8-d016-48b2-bf3f-f3583a73b86c" (UID: "7ecfead8-d016-48b2-bf3f-f3583a73b86c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.658502 4932 generic.go:334] "Generic (PLEG): container finished" podID="73e788d9-865f-453f-bdca-1de3b96af3e7" containerID="e9c9b5cc67858791a5222fc4a66b9a665582e43139a0ec213ce7bb500dba90ac" exitCode=0 Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.658561 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"73e788d9-865f-453f-bdca-1de3b96af3e7","Type":"ContainerDied","Data":"e9c9b5cc67858791a5222fc4a66b9a665582e43139a0ec213ce7bb500dba90ac"} Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.664553 4932 generic.go:334] "Generic (PLEG): container finished" podID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerID="e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9" exitCode=143 Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.664668 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f60cae27-f16b-4874-800f-f94fc2ce849f","Type":"ContainerDied","Data":"e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9"} Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.667618 4932 generic.go:334] "Generic (PLEG): container finished" podID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerID="2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03" exitCode=0 Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.667727 4932 generic.go:334] "Generic (PLEG): container finished" podID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerID="7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed" exitCode=143 Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.667798 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ecfead8-d016-48b2-bf3f-f3583a73b86c","Type":"ContainerDied","Data":"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03"} Feb 18 19:56:48 crc 
kubenswrapper[4932]: I0218 19:56:48.667887 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ecfead8-d016-48b2-bf3f-f3583a73b86c","Type":"ContainerDied","Data":"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed"} Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.667965 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ecfead8-d016-48b2-bf3f-f3583a73b86c","Type":"ContainerDied","Data":"19ffa197778275bcd9f48f91d00a44722c32c65bb9ad6a8add9bdfb08abe1a4f"} Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.668045 4932 scope.go:117] "RemoveContainer" containerID="2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.668285 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.686869 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.686904 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw78k\" (UniqueName: \"kubernetes.io/projected/7ecfead8-d016-48b2-bf3f-f3583a73b86c-kube-api-access-bw78k\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.686919 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ecfead8-d016-48b2-bf3f-f3583a73b86c-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.686930 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ecfead8-d016-48b2-bf3f-f3583a73b86c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:48 
crc kubenswrapper[4932]: I0218 19:56:48.705490 4932 scope.go:117] "RemoveContainer" containerID="7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.747421 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.755673 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.774944 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:48 crc kubenswrapper[4932]: E0218 19:56:48.775610 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-api" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.775636 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-api" Feb 18 19:56:48 crc kubenswrapper[4932]: E0218 19:56:48.775684 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-log" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.775693 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-log" Feb 18 19:56:48 crc kubenswrapper[4932]: E0218 19:56:48.775705 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738744b3-86e1-432c-8380-0d428a2e8263" containerName="nova-manage" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.775713 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="738744b3-86e1-432c-8380-0d428a2e8263" containerName="nova-manage" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.775937 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="738744b3-86e1-432c-8380-0d428a2e8263" containerName="nova-manage" Feb 18 19:56:48 crc 
kubenswrapper[4932]: I0218 19:56:48.775963 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-log" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.775978 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" containerName="nova-api-api" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.777306 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.784578 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.790848 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.800989 4932 scope.go:117] "RemoveContainer" containerID="2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03" Feb 18 19:56:48 crc kubenswrapper[4932]: E0218 19:56:48.801560 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03\": container with ID starting with 2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03 not found: ID does not exist" containerID="2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.801617 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03"} err="failed to get container status \"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03\": rpc error: code = NotFound desc = could not find container \"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03\": container with ID starting with 
2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03 not found: ID does not exist" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.801656 4932 scope.go:117] "RemoveContainer" containerID="7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed" Feb 18 19:56:48 crc kubenswrapper[4932]: E0218 19:56:48.802019 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed\": container with ID starting with 7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed not found: ID does not exist" containerID="7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.802056 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed"} err="failed to get container status \"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed\": rpc error: code = NotFound desc = could not find container \"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed\": container with ID starting with 7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed not found: ID does not exist" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.802082 4932 scope.go:117] "RemoveContainer" containerID="2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.802402 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03"} err="failed to get container status \"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03\": rpc error: code = NotFound desc = could not find container \"2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03\": container with ID 
starting with 2e6a7842ec70e52ef91342c6d36ceaecdc9ce13bbb7821cda8fb25326a88ee03 not found: ID does not exist" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.802446 4932 scope.go:117] "RemoveContainer" containerID="7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.802734 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed"} err="failed to get container status \"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed\": rpc error: code = NotFound desc = could not find container \"7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed\": container with ID starting with 7b3172ef67d8f38f7222d157519dade379701512484a1dc8dc13053e318532ed not found: ID does not exist" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.841858 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.891710 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-logs\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.891841 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.891933 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqmmj\" (UniqueName: 
\"kubernetes.io/projected/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-kube-api-access-xqmmj\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.891969 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-config-data\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.993702 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-combined-ca-bundle\") pod \"73e788d9-865f-453f-bdca-1de3b96af3e7\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.993818 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-config-data\") pod \"73e788d9-865f-453f-bdca-1de3b96af3e7\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.993924 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chq2r\" (UniqueName: \"kubernetes.io/projected/73e788d9-865f-453f-bdca-1de3b96af3e7-kube-api-access-chq2r\") pod \"73e788d9-865f-453f-bdca-1de3b96af3e7\" (UID: \"73e788d9-865f-453f-bdca-1de3b96af3e7\") " Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.994368 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0" Feb 18 19:56:48 crc 
kubenswrapper[4932]: I0218 19:56:48.994497 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqmmj\" (UniqueName: \"kubernetes.io/projected/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-kube-api-access-xqmmj\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.994551 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-config-data\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.994741 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-logs\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0" Feb 18 19:56:48 crc kubenswrapper[4932]: I0218 19:56:48.995272 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-logs\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:48.999774 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.000025 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e788d9-865f-453f-bdca-1de3b96af3e7-kube-api-access-chq2r" (OuterVolumeSpecName: "kube-api-access-chq2r") pod 
"73e788d9-865f-453f-bdca-1de3b96af3e7" (UID: "73e788d9-865f-453f-bdca-1de3b96af3e7"). InnerVolumeSpecName "kube-api-access-chq2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.010764 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-config-data\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.024261 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqmmj\" (UniqueName: \"kubernetes.io/projected/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-kube-api-access-xqmmj\") pod \"nova-api-0\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " pod="openstack/nova-api-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.027519 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-config-data" (OuterVolumeSpecName: "config-data") pod "73e788d9-865f-453f-bdca-1de3b96af3e7" (UID: "73e788d9-865f-453f-bdca-1de3b96af3e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.058583 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73e788d9-865f-453f-bdca-1de3b96af3e7" (UID: "73e788d9-865f-453f-bdca-1de3b96af3e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.095954 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.095984 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e788d9-865f-453f-bdca-1de3b96af3e7-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.095993 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chq2r\" (UniqueName: \"kubernetes.io/projected/73e788d9-865f-453f-bdca-1de3b96af3e7-kube-api-access-chq2r\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.138516 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.193743 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ecfead8-d016-48b2-bf3f-f3583a73b86c" path="/var/lib/kubelet/pods/7ecfead8-d016-48b2-bf3f-f3583a73b86c/volumes" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.215859 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.300220 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-nova-metadata-tls-certs\") pod \"f60cae27-f16b-4874-800f-f94fc2ce849f\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.300541 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frsfw\" (UniqueName: \"kubernetes.io/projected/f60cae27-f16b-4874-800f-f94fc2ce849f-kube-api-access-frsfw\") pod \"f60cae27-f16b-4874-800f-f94fc2ce849f\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.300707 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-config-data\") pod \"f60cae27-f16b-4874-800f-f94fc2ce849f\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.300774 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-combined-ca-bundle\") pod \"f60cae27-f16b-4874-800f-f94fc2ce849f\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.300868 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60cae27-f16b-4874-800f-f94fc2ce849f-logs\") pod \"f60cae27-f16b-4874-800f-f94fc2ce849f\" (UID: \"f60cae27-f16b-4874-800f-f94fc2ce849f\") " Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.301377 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f60cae27-f16b-4874-800f-f94fc2ce849f-logs" (OuterVolumeSpecName: "logs") pod "f60cae27-f16b-4874-800f-f94fc2ce849f" (UID: "f60cae27-f16b-4874-800f-f94fc2ce849f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.302590 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f60cae27-f16b-4874-800f-f94fc2ce849f-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.304158 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60cae27-f16b-4874-800f-f94fc2ce849f-kube-api-access-frsfw" (OuterVolumeSpecName: "kube-api-access-frsfw") pod "f60cae27-f16b-4874-800f-f94fc2ce849f" (UID: "f60cae27-f16b-4874-800f-f94fc2ce849f"). InnerVolumeSpecName "kube-api-access-frsfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.333422 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f60cae27-f16b-4874-800f-f94fc2ce849f" (UID: "f60cae27-f16b-4874-800f-f94fc2ce849f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.347296 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-config-data" (OuterVolumeSpecName: "config-data") pod "f60cae27-f16b-4874-800f-f94fc2ce849f" (UID: "f60cae27-f16b-4874-800f-f94fc2ce849f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.371418 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f60cae27-f16b-4874-800f-f94fc2ce849f" (UID: "f60cae27-f16b-4874-800f-f94fc2ce849f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.404113 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.404139 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.404150 4932 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f60cae27-f16b-4874-800f-f94fc2ce849f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.404161 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frsfw\" (UniqueName: \"kubernetes.io/projected/f60cae27-f16b-4874-800f-f94fc2ce849f-kube-api-access-frsfw\") on node \"crc\" DevicePath \"\"" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.639151 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.681294 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"73e788d9-865f-453f-bdca-1de3b96af3e7","Type":"ContainerDied","Data":"548cdd4f66aaff2ab06550adbc3a37a85c5a81145cf7ceddc60485f6dfd2fbce"} Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.681339 4932 scope.go:117] "RemoveContainer" containerID="e9c9b5cc67858791a5222fc4a66b9a665582e43139a0ec213ce7bb500dba90ac" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.681415 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.686878 4932 generic.go:334] "Generic (PLEG): container finished" podID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerID="9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3" exitCode=0 Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.686935 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.686956 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f60cae27-f16b-4874-800f-f94fc2ce849f","Type":"ContainerDied","Data":"9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3"} Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.686986 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f60cae27-f16b-4874-800f-f94fc2ce849f","Type":"ContainerDied","Data":"6cc5ccb6f5aea2a8c8a0f89d07f1d4998a02b0b51bdac9b6c2ee10d527679c5a"} Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.693229 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99f0bb69-5596-4997-b53f-9ceb9aa7cac1","Type":"ContainerStarted","Data":"d44a0859e6bc1ca146456cd319c226c1c97e6918ba7cf2e5b3fea2ceb5f507ac"} Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.708126 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 
19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.716441 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.740024 4932 scope.go:117] "RemoveContainer" containerID="9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742338 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:49 crc kubenswrapper[4932]: E0218 19:56:49.742695 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-log" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742707 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-log" Feb 18 19:56:49 crc kubenswrapper[4932]: E0218 19:56:49.742750 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-metadata" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742757 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-metadata" Feb 18 19:56:49 crc kubenswrapper[4932]: E0218 19:56:49.742777 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e788d9-865f-453f-bdca-1de3b96af3e7" containerName="nova-scheduler-scheduler" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742783 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e788d9-865f-453f-bdca-1de3b96af3e7" containerName="nova-scheduler-scheduler" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742953 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e788d9-865f-453f-bdca-1de3b96af3e7" containerName="nova-scheduler-scheduler" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742965 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-metadata" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.742973 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" containerName="nova-metadata-log" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.743676 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.752427 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.752565 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.752633 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.760699 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.786940 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.819623 4932 scope.go:117] "RemoveContainer" containerID="e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.829484 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.831454 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.833747 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.833805 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.842785 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.858820 4932 scope.go:117] "RemoveContainer" containerID="9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3" Feb 18 19:56:49 crc kubenswrapper[4932]: E0218 19:56:49.860310 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3\": container with ID starting with 9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3 not found: ID does not exist" containerID="9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.860387 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3"} err="failed to get container status \"9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3\": rpc error: code = NotFound desc = could not find container \"9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3\": container with ID starting with 9bd9281f99ab075955fe8b5af26c0c780a0d2bc911ceed153986d0271a1ec7e3 not found: ID does not exist" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.860423 4932 scope.go:117] "RemoveContainer" containerID="e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9" Feb 18 19:56:49 crc 
kubenswrapper[4932]: E0218 19:56:49.861924 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9\": container with ID starting with e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9 not found: ID does not exist" containerID="e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.861947 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9"} err="failed to get container status \"e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9\": rpc error: code = NotFound desc = could not find container \"e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9\": container with ID starting with e0182ed6367a5a181d31bfd848c8732bc890073ca07bb24e814dcf3dda3b39a9 not found: ID does not exist" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.914736 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1c40f97-715a-4ff5-a0f3-1c31cb982552-logs\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.914838 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.915007 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f5b81937-96f5-42e2-b937-ab11c79ff3d0-config-data\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.915070 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2wzp\" (UniqueName: \"kubernetes.io/projected/f5b81937-96f5-42e2-b937-ab11c79ff3d0-kube-api-access-g2wzp\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.915295 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.915437 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-config-data\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.915500 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b81937-96f5-42e2-b937-ab11c79ff3d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:49 crc kubenswrapper[4932]: I0218 19:56:49.915588 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2gvg\" (UniqueName: 
\"kubernetes.io/projected/e1c40f97-715a-4ff5-a0f3-1c31cb982552-kube-api-access-z2gvg\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:49 crc kubenswrapper[4932]: E0218 19:56:49.939212 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf60cae27_f16b_4874_800f_f94fc2ce849f.slice/crio-6cc5ccb6f5aea2a8c8a0f89d07f1d4998a02b0b51bdac9b6c2ee10d527679c5a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf60cae27_f16b_4874_800f_f94fc2ce849f.slice\": RecentStats: unable to find data in memory cache]" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017480 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017580 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b81937-96f5-42e2-b937-ab11c79ff3d0-config-data\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017605 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2wzp\" (UniqueName: \"kubernetes.io/projected/f5b81937-96f5-42e2-b937-ab11c79ff3d0-kube-api-access-g2wzp\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017650 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017678 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-config-data\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017698 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b81937-96f5-42e2-b937-ab11c79ff3d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017720 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2gvg\" (UniqueName: \"kubernetes.io/projected/e1c40f97-715a-4ff5-a0f3-1c31cb982552-kube-api-access-z2gvg\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.017753 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1c40f97-715a-4ff5-a0f3-1c31cb982552-logs\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.018201 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1c40f97-715a-4ff5-a0f3-1c31cb982552-logs\") pod \"nova-metadata-0\" (UID: 
\"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.021724 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b81937-96f5-42e2-b937-ab11c79ff3d0-config-data\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.021890 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-config-data\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.023352 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.025226 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b81937-96f5-42e2-b937-ab11c79ff3d0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.027493 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1c40f97-715a-4ff5-a0f3-1c31cb982552-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.044194 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z2gvg\" (UniqueName: \"kubernetes.io/projected/e1c40f97-715a-4ff5-a0f3-1c31cb982552-kube-api-access-z2gvg\") pod \"nova-metadata-0\" (UID: \"e1c40f97-715a-4ff5-a0f3-1c31cb982552\") " pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.044359 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2wzp\" (UniqueName: \"kubernetes.io/projected/f5b81937-96f5-42e2-b937-ab11c79ff3d0-kube-api-access-g2wzp\") pod \"nova-scheduler-0\" (UID: \"f5b81937-96f5-42e2-b937-ab11c79ff3d0\") " pod="openstack/nova-scheduler-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.097102 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.151949 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.592515 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.686490 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:56:50 crc kubenswrapper[4932]: W0218 19:56:50.691437 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1c40f97_715a_4ff5_a0f3_1c31cb982552.slice/crio-7e215cfef40e726d4ff650e0ce5408e506726966fe01fac3aec54fc5c083ab80 WatchSource:0}: Error finding container 7e215cfef40e726d4ff650e0ce5408e506726966fe01fac3aec54fc5c083ab80: Status 404 returned error can't find the container with id 7e215cfef40e726d4ff650e0ce5408e506726966fe01fac3aec54fc5c083ab80 Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.706509 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"99f0bb69-5596-4997-b53f-9ceb9aa7cac1","Type":"ContainerStarted","Data":"ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239"} Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.706552 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99f0bb69-5596-4997-b53f-9ceb9aa7cac1","Type":"ContainerStarted","Data":"cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f"} Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.708821 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1c40f97-715a-4ff5-a0f3-1c31cb982552","Type":"ContainerStarted","Data":"7e215cfef40e726d4ff650e0ce5408e506726966fe01fac3aec54fc5c083ab80"} Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.710545 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f5b81937-96f5-42e2-b937-ab11c79ff3d0","Type":"ContainerStarted","Data":"c802a8f843349e7a91e1c54efa3ea6e2da76e22c0517d4146e5ed79e8aa95cf9"} Feb 18 19:56:50 crc kubenswrapper[4932]: I0218 19:56:50.728021 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.7280037249999998 podStartE2EDuration="2.728003725s" podCreationTimestamp="2026-02-18 19:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:50.72300724 +0000 UTC m=+1374.304962105" watchObservedRunningTime="2026-02-18 19:56:50.728003725 +0000 UTC m=+1374.309958570" Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.193906 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e788d9-865f-453f-bdca-1de3b96af3e7" path="/var/lib/kubelet/pods/73e788d9-865f-453f-bdca-1de3b96af3e7/volumes" Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.194631 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f60cae27-f16b-4874-800f-f94fc2ce849f" path="/var/lib/kubelet/pods/f60cae27-f16b-4874-800f-f94fc2ce849f/volumes" Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.723445 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f5b81937-96f5-42e2-b937-ab11c79ff3d0","Type":"ContainerStarted","Data":"09f93a301ca4f462a1ffbd48b85ef5252a8763eadc40b8eba013b0e6730682c9"} Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.729315 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1c40f97-715a-4ff5-a0f3-1c31cb982552","Type":"ContainerStarted","Data":"c6d7764577ef61c1aacdb6be7f0f7af75476eca669f0bc547da31d8a799aa0e0"} Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.729366 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1c40f97-715a-4ff5-a0f3-1c31cb982552","Type":"ContainerStarted","Data":"a64493fc3a17893cd6129b3f06f37dd6a0a8a196133064d219d4f9b4be075060"} Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.744582 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.744560184 podStartE2EDuration="2.744560184s" podCreationTimestamp="2026-02-18 19:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:56:51.74035769 +0000 UTC m=+1375.322312535" watchObservedRunningTime="2026-02-18 19:56:51.744560184 +0000 UTC m=+1375.326515029" Feb 18 19:56:51 crc kubenswrapper[4932]: I0218 19:56:51.766069 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.76604867 podStartE2EDuration="2.76604867s" podCreationTimestamp="2026-02-18 19:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:56:51.763556788 +0000 UTC m=+1375.345511633" watchObservedRunningTime="2026-02-18 19:56:51.76604867 +0000 UTC m=+1375.348003515" Feb 18 19:56:55 crc kubenswrapper[4932]: I0218 19:56:55.098305 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:56:55 crc kubenswrapper[4932]: I0218 19:56:55.152057 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:56:55 crc kubenswrapper[4932]: I0218 19:56:55.152107 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:56:59 crc kubenswrapper[4932]: I0218 19:56:59.139672 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:56:59 crc kubenswrapper[4932]: I0218 19:56:59.140305 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.101449 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.143589 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.152192 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.152385 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.222475 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.222524 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.223:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:57:00 crc kubenswrapper[4932]: I0218 19:57:00.896932 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:57:01 crc kubenswrapper[4932]: I0218 19:57:01.164331 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e1c40f97-715a-4ff5-a0f3-1c31cb982552" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:57:01 crc kubenswrapper[4932]: I0218 19:57:01.164363 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e1c40f97-715a-4ff5-a0f3-1c31cb982552" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.145533 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.146136 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.146442 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.146464 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:57:09 crc 
kubenswrapper[4932]: I0218 19:57:09.154481 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.155441 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.392619 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c95b7c697-ptvr7"] Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.394526 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.420646 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c95b7c697-ptvr7"] Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.543708 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-svc\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.543787 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-swift-storage-0\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.543821 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-config\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " 
pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.543903 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-nb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.543931 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-sb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.543995 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxnb\" (UniqueName: \"kubernetes.io/projected/f91611fc-84cb-4a52-8943-b4a5c7481f45-kube-api-access-vbxnb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.645607 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-nb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.645705 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-sb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: 
\"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.645810 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxnb\" (UniqueName: \"kubernetes.io/projected/f91611fc-84cb-4a52-8943-b4a5c7481f45-kube-api-access-vbxnb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.645843 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-svc\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.645898 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-swift-storage-0\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.645934 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-config\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.646818 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-nb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " 
pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.647113 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-config\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.647373 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-sb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.647437 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-swift-storage-0\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.647950 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-svc\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 19:57:09.676341 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxnb\" (UniqueName: \"kubernetes.io/projected/f91611fc-84cb-4a52-8943-b4a5c7481f45-kube-api-access-vbxnb\") pod \"dnsmasq-dns-7c95b7c697-ptvr7\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:09 crc kubenswrapper[4932]: I0218 
19:57:09.719110 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.192585 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.238579 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.243525 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.275243 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c95b7c697-ptvr7"] Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.946249 4932 generic.go:334] "Generic (PLEG): container finished" podID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerID="9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e" exitCode=0 Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.947718 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" event={"ID":"f91611fc-84cb-4a52-8943-b4a5c7481f45","Type":"ContainerDied","Data":"9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e"} Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.947754 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" event={"ID":"f91611fc-84cb-4a52-8943-b4a5c7481f45","Type":"ContainerStarted","Data":"655d5fb141738aad0155e62442b9035066c7a9ec2985b3b96a40dbf2d8892c36"} Feb 18 19:57:10 crc kubenswrapper[4932]: I0218 19:57:10.966933 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:57:11 crc kubenswrapper[4932]: I0218 19:57:11.965808 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" event={"ID":"f91611fc-84cb-4a52-8943-b4a5c7481f45","Type":"ContainerStarted","Data":"6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632"} Feb 18 19:57:11 crc kubenswrapper[4932]: I0218 19:57:11.966140 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.026995 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" podStartSLOduration=3.026971217 podStartE2EDuration="3.026971217s" podCreationTimestamp="2026-02-18 19:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:12.006317672 +0000 UTC m=+1395.588272517" watchObservedRunningTime="2026-02-18 19:57:12.026971217 +0000 UTC m=+1395.608926052" Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.218751 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.219045 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-central-agent" containerID="cri-o://d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea" gracePeriod=30 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.219116 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="proxy-httpd" containerID="cri-o://c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39" gracePeriod=30 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.219186 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-notification-agent" containerID="cri-o://beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504" gracePeriod=30 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.219151 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="sg-core" containerID="cri-o://89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c" gracePeriod=30 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.273261 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.273508 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-log" containerID="cri-o://cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f" gracePeriod=30 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.273850 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-api" containerID="cri-o://ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239" gracePeriod=30 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.974715 4932 generic.go:334] "Generic (PLEG): container finished" podID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerID="cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f" exitCode=143 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.974790 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99f0bb69-5596-4997-b53f-9ceb9aa7cac1","Type":"ContainerDied","Data":"cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f"} Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.976902 4932 generic.go:334] 
"Generic (PLEG): container finished" podID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerID="c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39" exitCode=0 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.976973 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerDied","Data":"c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39"} Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.977010 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerDied","Data":"89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c"} Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.976980 4932 generic.go:334] "Generic (PLEG): container finished" podID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerID="89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c" exitCode=2 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.977028 4932 generic.go:334] "Generic (PLEG): container finished" podID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerID="d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea" exitCode=0 Feb 18 19:57:12 crc kubenswrapper[4932]: I0218 19:57:12.977143 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerDied","Data":"d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea"} Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.461777 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.561681 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqmmj\" (UniqueName: \"kubernetes.io/projected/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-kube-api-access-xqmmj\") pod \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.561737 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-combined-ca-bundle\") pod \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.561845 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-config-data\") pod \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.561959 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-logs\") pod \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\" (UID: \"99f0bb69-5596-4997-b53f-9ceb9aa7cac1\") " Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.562868 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-logs" (OuterVolumeSpecName: "logs") pod "99f0bb69-5596-4997-b53f-9ceb9aa7cac1" (UID: "99f0bb69-5596-4997-b53f-9ceb9aa7cac1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.571295 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-kube-api-access-xqmmj" (OuterVolumeSpecName: "kube-api-access-xqmmj") pod "99f0bb69-5596-4997-b53f-9ceb9aa7cac1" (UID: "99f0bb69-5596-4997-b53f-9ceb9aa7cac1"). InnerVolumeSpecName "kube-api-access-xqmmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.604801 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-config-data" (OuterVolumeSpecName: "config-data") pod "99f0bb69-5596-4997-b53f-9ceb9aa7cac1" (UID: "99f0bb69-5596-4997-b53f-9ceb9aa7cac1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.608830 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99f0bb69-5596-4997-b53f-9ceb9aa7cac1" (UID: "99f0bb69-5596-4997-b53f-9ceb9aa7cac1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.664463 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqmmj\" (UniqueName: \"kubernetes.io/projected/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-kube-api-access-xqmmj\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.664519 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.664531 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.664548 4932 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f0bb69-5596-4997-b53f-9ceb9aa7cac1-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.997847 4932 generic.go:334] "Generic (PLEG): container finished" podID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerID="ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239" exitCode=0 Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.997924 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99f0bb69-5596-4997-b53f-9ceb9aa7cac1","Type":"ContainerDied","Data":"ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239"} Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.998263 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"99f0bb69-5596-4997-b53f-9ceb9aa7cac1","Type":"ContainerDied","Data":"d44a0859e6bc1ca146456cd319c226c1c97e6918ba7cf2e5b3fea2ceb5f507ac"} Feb 18 19:57:14 crc kubenswrapper[4932]: 
I0218 19:57:14.997952 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:57:14 crc kubenswrapper[4932]: I0218 19:57:14.998330 4932 scope.go:117] "RemoveContainer" containerID="ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.040439 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.047939 4932 scope.go:117] "RemoveContainer" containerID="cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.053797 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.070721 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:57:15 crc kubenswrapper[4932]: E0218 19:57:15.071254 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-log" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.071276 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-log" Feb 18 19:57:15 crc kubenswrapper[4932]: E0218 19:57:15.071299 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-api" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.071308 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-api" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.071587 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-log" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.071622 4932 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" containerName="nova-api-api" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.072932 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.075317 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.075527 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.076028 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.080549 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.088696 4932 scope.go:117] "RemoveContainer" containerID="ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239" Feb 18 19:57:15 crc kubenswrapper[4932]: E0218 19:57:15.091278 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239\": container with ID starting with ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239 not found: ID does not exist" containerID="ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.091326 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239"} err="failed to get container status \"ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239\": rpc error: code = NotFound desc = could not find container 
\"ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239\": container with ID starting with ac9cdaf20bd3031a876c24c3a0e0b7388e76ee75a82ad5892cfd63d1046f7239 not found: ID does not exist" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.091353 4932 scope.go:117] "RemoveContainer" containerID="cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f" Feb 18 19:57:15 crc kubenswrapper[4932]: E0218 19:57:15.091669 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f\": container with ID starting with cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f not found: ID does not exist" containerID="cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.091698 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f"} err="failed to get container status \"cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f\": rpc error: code = NotFound desc = could not find container \"cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f\": container with ID starting with cfe37401ae7093b7931bae875722005994cdb5c2fb9e591c12bc3ca30f3abc4f not found: ID does not exist" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.174467 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd6q2\" (UniqueName: \"kubernetes.io/projected/d6624ec2-3d16-4050-a368-9f196157bbf5-kube-api-access-hd6q2\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.174778 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.174878 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6624ec2-3d16-4050-a368-9f196157bbf5-logs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.174950 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-public-tls-certs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.175047 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-config-data\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.175089 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.190771 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99f0bb69-5596-4997-b53f-9ceb9aa7cac1" path="/var/lib/kubelet/pods/99f0bb69-5596-4997-b53f-9ceb9aa7cac1/volumes" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.276605 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-config-data\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.276666 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.276744 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd6q2\" (UniqueName: \"kubernetes.io/projected/d6624ec2-3d16-4050-a368-9f196157bbf5-kube-api-access-hd6q2\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.276776 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.276831 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6624ec2-3d16-4050-a368-9f196157bbf5-logs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.276860 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.277946 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6624ec2-3d16-4050-a368-9f196157bbf5-logs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.281968 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-public-tls-certs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.282009 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.282057 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.282641 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6624ec2-3d16-4050-a368-9f196157bbf5-config-data\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.305489 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd6q2\" (UniqueName: 
\"kubernetes.io/projected/d6624ec2-3d16-4050-a368-9f196157bbf5-kube-api-access-hd6q2\") pod \"nova-api-0\" (UID: \"d6624ec2-3d16-4050-a368-9f196157bbf5\") " pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.444305 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:57:15 crc kubenswrapper[4932]: I0218 19:57:15.902130 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:57:15 crc kubenswrapper[4932]: W0218 19:57:15.914325 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6624ec2_3d16_4050_a368_9f196157bbf5.slice/crio-4bd44fac229ff96f7fa970cf5ab9351e825e31498c1ca0dce65e69bc684dfad0 WatchSource:0}: Error finding container 4bd44fac229ff96f7fa970cf5ab9351e825e31498c1ca0dce65e69bc684dfad0: Status 404 returned error can't find the container with id 4bd44fac229ff96f7fa970cf5ab9351e825e31498c1ca0dce65e69bc684dfad0 Feb 18 19:57:16 crc kubenswrapper[4932]: I0218 19:57:16.010925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d6624ec2-3d16-4050-a368-9f196157bbf5","Type":"ContainerStarted","Data":"4bd44fac229ff96f7fa970cf5ab9351e825e31498c1ca0dce65e69bc684dfad0"} Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.036637 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d6624ec2-3d16-4050-a368-9f196157bbf5","Type":"ContainerStarted","Data":"db638daddb0b23c5095b633c55576a9da0f22ae163669278c1f1885dd8cfeaa9"} Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.037012 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d6624ec2-3d16-4050-a368-9f196157bbf5","Type":"ContainerStarted","Data":"32fd820869df0e1bb14f5e26cb049c1cb9e0cf510951376c7aa1af5f51cfb5b9"} Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.065385 4932 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.065361004 podStartE2EDuration="2.065361004s" podCreationTimestamp="2026-02-18 19:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:57:17.0560063 +0000 UTC m=+1400.637961155" watchObservedRunningTime="2026-02-18 19:57:17.065361004 +0000 UTC m=+1400.647315859" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.486281 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528538 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-run-httpd\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528609 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-scripts\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528653 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-log-httpd\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528696 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-config-data\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: 
\"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528721 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bvgl\" (UniqueName: \"kubernetes.io/projected/07d7be76-f5d6-4280-8009-01c1db25ee6e-kube-api-access-2bvgl\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528745 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-ceilometer-tls-certs\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528764 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-sg-core-conf-yaml\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.528800 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-combined-ca-bundle\") pod \"07d7be76-f5d6-4280-8009-01c1db25ee6e\" (UID: \"07d7be76-f5d6-4280-8009-01c1db25ee6e\") " Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.537780 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.538094 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.546719 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-scripts" (OuterVolumeSpecName: "scripts") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.554397 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d7be76-f5d6-4280-8009-01c1db25ee6e-kube-api-access-2bvgl" (OuterVolumeSpecName: "kube-api-access-2bvgl") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "kube-api-access-2bvgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.620557 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.626247 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.630999 4932 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.631046 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bvgl\" (UniqueName: \"kubernetes.io/projected/07d7be76-f5d6-4280-8009-01c1db25ee6e-kube-api-access-2bvgl\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.631061 4932 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.631076 4932 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.631087 4932 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07d7be76-f5d6-4280-8009-01c1db25ee6e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.631102 4932 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.651160 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.668291 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-config-data" (OuterVolumeSpecName: "config-data") pod "07d7be76-f5d6-4280-8009-01c1db25ee6e" (UID: "07d7be76-f5d6-4280-8009-01c1db25ee6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.733387 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:17 crc kubenswrapper[4932]: I0218 19:57:17.733418 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d7be76-f5d6-4280-8009-01c1db25ee6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.054574 4932 generic.go:334] "Generic (PLEG): container finished" podID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerID="beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504" exitCode=0 Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.054705 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.055439 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerDied","Data":"beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504"} Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.055500 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07d7be76-f5d6-4280-8009-01c1db25ee6e","Type":"ContainerDied","Data":"bed094a29d5cc735d8b58329a9d581210c267db550c3be7eeb9923193dc084eb"} Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.055522 4932 scope.go:117] "RemoveContainer" containerID="c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.093851 4932 scope.go:117] "RemoveContainer" containerID="89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.113074 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.136330 4932 scope.go:117] "RemoveContainer" containerID="beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.139270 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.150885 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.151733 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="sg-core" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.151759 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" 
containerName="sg-core" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.151805 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-central-agent" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.151815 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-central-agent" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.151829 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-notification-agent" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.151836 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-notification-agent" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.151869 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="proxy-httpd" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.151878 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="proxy-httpd" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.152119 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="proxy-httpd" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.152153 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-notification-agent" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.152201 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="sg-core" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.152215 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" containerName="ceilometer-central-agent" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.156541 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.159157 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.159813 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.160068 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.167404 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.180287 4932 scope.go:117] "RemoveContainer" containerID="d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.205151 4932 scope.go:117] "RemoveContainer" containerID="c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.205549 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39\": container with ID starting with c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39 not found: ID does not exist" containerID="c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.205579 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39"} err="failed to get container status 
\"c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39\": rpc error: code = NotFound desc = could not find container \"c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39\": container with ID starting with c4fa2a5b1772ec58a7601e53c0ae5987c2f3fd3000f703ae1787ee43bf85cc39 not found: ID does not exist" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.205604 4932 scope.go:117] "RemoveContainer" containerID="89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.205944 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c\": container with ID starting with 89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c not found: ID does not exist" containerID="89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.205998 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c"} err="failed to get container status \"89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c\": rpc error: code = NotFound desc = could not find container \"89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c\": container with ID starting with 89bbe6e7ca3b99944003d5007b9764aa09286a2f27274b6cbc2ca9273e89b24c not found: ID does not exist" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.206058 4932 scope.go:117] "RemoveContainer" containerID="beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.207448 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504\": container with ID starting with beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504 not found: ID does not exist" containerID="beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.207477 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504"} err="failed to get container status \"beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504\": rpc error: code = NotFound desc = could not find container \"beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504\": container with ID starting with beafe51f72187b97ef3ab0bafd91804056c1481e3448c26a73646bd8e29f2504 not found: ID does not exist" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.207495 4932 scope.go:117] "RemoveContainer" containerID="d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea" Feb 18 19:57:18 crc kubenswrapper[4932]: E0218 19:57:18.207794 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea\": container with ID starting with d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea not found: ID does not exist" containerID="d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.207841 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea"} err="failed to get container status \"d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea\": rpc error: code = NotFound desc = could not find container \"d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea\": container with ID 
starting with d722c93032aba6113b113eeb8354bc70c1d209a0eebc1c1a06a6503b3551ccea not found: ID does not exist" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.243782 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-log-httpd\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.243899 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsc42\" (UniqueName: \"kubernetes.io/projected/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-kube-api-access-vsc42\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.243956 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-scripts\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.244024 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.244055 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 
19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.244115 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-run-httpd\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.244155 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.244198 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-config-data\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346144 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-scripts\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346228 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346266 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346328 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-run-httpd\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346379 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346403 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-config-data\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346452 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-log-httpd\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.346529 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsc42\" (UniqueName: \"kubernetes.io/projected/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-kube-api-access-vsc42\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: 
I0218 19:57:18.348343 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-log-httpd\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.348506 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-run-httpd\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.352543 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-config-data\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.352894 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.353074 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.353884 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.354966 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-scripts\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.369369 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsc42\" (UniqueName: \"kubernetes.io/projected/7e3d40ff-e417-475c-88e8-ea5adf1f40e6-kube-api-access-vsc42\") pod \"ceilometer-0\" (UID: \"7e3d40ff-e417-475c-88e8-ea5adf1f40e6\") " pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.482658 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:57:18 crc kubenswrapper[4932]: I0218 19:57:18.936794 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.073659 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e3d40ff-e417-475c-88e8-ea5adf1f40e6","Type":"ContainerStarted","Data":"07de8e561d3e4d0d3f051288dd8b7aafc54f8e71086628186d3b560f6ecebdef"} Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.192001 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07d7be76-f5d6-4280-8009-01c1db25ee6e" path="/var/lib/kubelet/pods/07d7be76-f5d6-4280-8009-01c1db25ee6e/volumes" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.472949 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nvplf"] Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.475019 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.499335 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvplf"] Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.571666 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-utilities\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.571923 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-catalog-content\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.572077 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npzvf\" (UniqueName: \"kubernetes.io/projected/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-kube-api-access-npzvf\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.673920 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npzvf\" (UniqueName: \"kubernetes.io/projected/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-kube-api-access-npzvf\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.674001 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-utilities\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.674024 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-catalog-content\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.674529 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-utilities\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.674598 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-catalog-content\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.699101 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npzvf\" (UniqueName: \"kubernetes.io/projected/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-kube-api-access-npzvf\") pod \"redhat-operators-nvplf\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.720357 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.797382 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-87f66f8bf-sszng"] Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.797648 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="dnsmasq-dns" containerID="cri-o://3c2dd4d6051f054d8ec462a813d59a2da849d9297e15a4c7e5cbe0de8d6eca93" gracePeriod=10 Feb 18 19:57:19 crc kubenswrapper[4932]: I0218 19:57:19.810343 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.116554 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e3d40ff-e417-475c-88e8-ea5adf1f40e6","Type":"ContainerStarted","Data":"f0812891d869584a4693f01a79121c669aa1fd2d2a4417194a4f35894b947583"} Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.116853 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e3d40ff-e417-475c-88e8-ea5adf1f40e6","Type":"ContainerStarted","Data":"b2056538af318016b3a43ddb182a1dc99ac70f398cb30d801529059a8962c269"} Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.125485 4932 generic.go:334] "Generic (PLEG): container finished" podID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerID="3c2dd4d6051f054d8ec462a813d59a2da849d9297e15a4c7e5cbe0de8d6eca93" exitCode=0 Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.125530 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" event={"ID":"c89ff872-244d-428a-a29c-3b9adeae5c0c","Type":"ContainerDied","Data":"3c2dd4d6051f054d8ec462a813d59a2da849d9297e15a4c7e5cbe0de8d6eca93"} Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.403627 4932 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nvplf"] Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.699485 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.901811 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-sb\") pod \"c89ff872-244d-428a-a29c-3b9adeae5c0c\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.902095 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-svc\") pod \"c89ff872-244d-428a-a29c-3b9adeae5c0c\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.902299 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-swift-storage-0\") pod \"c89ff872-244d-428a-a29c-3b9adeae5c0c\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.902318 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-nb\") pod \"c89ff872-244d-428a-a29c-3b9adeae5c0c\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.902383 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjq9z\" (UniqueName: \"kubernetes.io/projected/c89ff872-244d-428a-a29c-3b9adeae5c0c-kube-api-access-cjq9z\") pod 
\"c89ff872-244d-428a-a29c-3b9adeae5c0c\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.902399 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-config\") pod \"c89ff872-244d-428a-a29c-3b9adeae5c0c\" (UID: \"c89ff872-244d-428a-a29c-3b9adeae5c0c\") " Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.919529 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89ff872-244d-428a-a29c-3b9adeae5c0c-kube-api-access-cjq9z" (OuterVolumeSpecName: "kube-api-access-cjq9z") pod "c89ff872-244d-428a-a29c-3b9adeae5c0c" (UID: "c89ff872-244d-428a-a29c-3b9adeae5c0c"). InnerVolumeSpecName "kube-api-access-cjq9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.971685 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c89ff872-244d-428a-a29c-3b9adeae5c0c" (UID: "c89ff872-244d-428a-a29c-3b9adeae5c0c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.992979 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c89ff872-244d-428a-a29c-3b9adeae5c0c" (UID: "c89ff872-244d-428a-a29c-3b9adeae5c0c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.995624 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-config" (OuterVolumeSpecName: "config") pod "c89ff872-244d-428a-a29c-3b9adeae5c0c" (UID: "c89ff872-244d-428a-a29c-3b9adeae5c0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:20 crc kubenswrapper[4932]: I0218 19:57:20.999869 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c89ff872-244d-428a-a29c-3b9adeae5c0c" (UID: "c89ff872-244d-428a-a29c-3b9adeae5c0c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.005750 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.005820 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.005832 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjq9z\" (UniqueName: \"kubernetes.io/projected/c89ff872-244d-428a-a29c-3b9adeae5c0c-kube-api-access-cjq9z\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.005846 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-config\") on node \"crc\" DevicePath \"\"" Feb 
18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.005855 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.016646 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c89ff872-244d-428a-a29c-3b9adeae5c0c" (UID: "c89ff872-244d-428a-a29c-3b9adeae5c0c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.108073 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c89ff872-244d-428a-a29c-3b9adeae5c0c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.135537 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e3d40ff-e417-475c-88e8-ea5adf1f40e6","Type":"ContainerStarted","Data":"9dc865bc159b7f3f1d0586bafba309d1af9f0c9a279fb487f92307d5be96c487"} Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.137450 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" event={"ID":"c89ff872-244d-428a-a29c-3b9adeae5c0c","Type":"ContainerDied","Data":"3a5bcecade0b5dff94560cc8f3a4637b00cd9cdde3e3372019fd257bdc54822e"} Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.137471 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-87f66f8bf-sszng" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.137508 4932 scope.go:117] "RemoveContainer" containerID="3c2dd4d6051f054d8ec462a813d59a2da849d9297e15a4c7e5cbe0de8d6eca93" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.138968 4932 generic.go:334] "Generic (PLEG): container finished" podID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerID="6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a" exitCode=0 Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.139014 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerDied","Data":"6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a"} Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.139038 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerStarted","Data":"bd6d1dac6bf3ebca465127b4e668733d6b3eab206b93e857a3ffc9cc951ff030"} Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.182445 4932 scope.go:117] "RemoveContainer" containerID="efa1de95f92b6f71ab718eba81f5146f37d50f46643463b88203e329ebaceb9a" Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.256142 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-87f66f8bf-sszng"] Feb 18 19:57:21 crc kubenswrapper[4932]: I0218 19:57:21.277516 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-87f66f8bf-sszng"] Feb 18 19:57:22 crc kubenswrapper[4932]: I0218 19:57:22.153899 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerStarted","Data":"023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8"} Feb 18 19:57:23 crc 
kubenswrapper[4932]: I0218 19:57:23.168966 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e3d40ff-e417-475c-88e8-ea5adf1f40e6","Type":"ContainerStarted","Data":"261c3f31d70b14a90e3ad6ce964cc3aa48b7447063b6b0bd154dc91874bb7d0e"} Feb 18 19:57:23 crc kubenswrapper[4932]: I0218 19:57:23.169362 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:57:23 crc kubenswrapper[4932]: I0218 19:57:23.200268 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.053498656 podStartE2EDuration="5.200250005s" podCreationTimestamp="2026-02-18 19:57:18 +0000 UTC" firstStartedPulling="2026-02-18 19:57:18.943977658 +0000 UTC m=+1402.525932503" lastFinishedPulling="2026-02-18 19:57:22.090728997 +0000 UTC m=+1405.672683852" observedRunningTime="2026-02-18 19:57:23.190750988 +0000 UTC m=+1406.772705833" watchObservedRunningTime="2026-02-18 19:57:23.200250005 +0000 UTC m=+1406.782204850" Feb 18 19:57:23 crc kubenswrapper[4932]: I0218 19:57:23.202560 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" path="/var/lib/kubelet/pods/c89ff872-244d-428a-a29c-3b9adeae5c0c/volumes" Feb 18 19:57:25 crc kubenswrapper[4932]: I0218 19:57:25.191915 4932 generic.go:334] "Generic (PLEG): container finished" podID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerID="023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8" exitCode=0 Feb 18 19:57:25 crc kubenswrapper[4932]: I0218 19:57:25.193438 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerDied","Data":"023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8"} Feb 18 19:57:25 crc kubenswrapper[4932]: I0218 19:57:25.428250 4932 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-87f66f8bf-sszng" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.212:5353: i/o timeout" Feb 18 19:57:25 crc kubenswrapper[4932]: I0218 19:57:25.444499 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:57:25 crc kubenswrapper[4932]: I0218 19:57:25.444550 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:57:26 crc kubenswrapper[4932]: I0218 19:57:26.204015 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerStarted","Data":"09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a"} Feb 18 19:57:26 crc kubenswrapper[4932]: I0218 19:57:26.230202 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nvplf" podStartSLOduration=2.772706014 podStartE2EDuration="7.230156391s" podCreationTimestamp="2026-02-18 19:57:19 +0000 UTC" firstStartedPulling="2026-02-18 19:57:21.142988914 +0000 UTC m=+1404.724943759" lastFinishedPulling="2026-02-18 19:57:25.600439251 +0000 UTC m=+1409.182394136" observedRunningTime="2026-02-18 19:57:26.227500875 +0000 UTC m=+1409.809455720" watchObservedRunningTime="2026-02-18 19:57:26.230156391 +0000 UTC m=+1409.812111236" Feb 18 19:57:26 crc kubenswrapper[4932]: I0218 19:57:26.464464 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d6624ec2-3d16-4050-a368-9f196157bbf5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:57:26 crc kubenswrapper[4932]: I0218 19:57:26.464479 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="d6624ec2-3d16-4050-a368-9f196157bbf5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.791560 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wkrfs"] Feb 18 19:57:28 crc kubenswrapper[4932]: E0218 19:57:28.792314 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="init" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.792327 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="init" Feb 18 19:57:28 crc kubenswrapper[4932]: E0218 19:57:28.792356 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="dnsmasq-dns" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.792362 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="dnsmasq-dns" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.792531 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c89ff872-244d-428a-a29c-3b9adeae5c0c" containerName="dnsmasq-dns" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.794260 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.808459 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkrfs"] Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.862281 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tks82\" (UniqueName: \"kubernetes.io/projected/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-kube-api-access-tks82\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.862388 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-catalog-content\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.862659 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-utilities\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.964866 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-utilities\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.965155 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tks82\" (UniqueName: \"kubernetes.io/projected/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-kube-api-access-tks82\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.965316 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-catalog-content\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.965337 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-utilities\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.965787 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-catalog-content\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:28 crc kubenswrapper[4932]: I0218 19:57:28.984379 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tks82\" (UniqueName: \"kubernetes.io/projected/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-kube-api-access-tks82\") pod \"community-operators-wkrfs\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:29 crc kubenswrapper[4932]: I0218 19:57:29.126799 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:29 crc kubenswrapper[4932]: I0218 19:57:29.782303 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wkrfs"] Feb 18 19:57:29 crc kubenswrapper[4932]: I0218 19:57:29.810563 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:29 crc kubenswrapper[4932]: I0218 19:57:29.810607 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:57:30 crc kubenswrapper[4932]: I0218 19:57:30.249459 4932 generic.go:334] "Generic (PLEG): container finished" podID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerID="ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285" exitCode=0 Feb 18 19:57:30 crc kubenswrapper[4932]: I0218 19:57:30.249522 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkrfs" event={"ID":"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a","Type":"ContainerDied","Data":"ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285"} Feb 18 19:57:30 crc kubenswrapper[4932]: I0218 19:57:30.249789 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkrfs" event={"ID":"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a","Type":"ContainerStarted","Data":"78e974bf0eef1be4d5c43dcef00fe5ff25f566422b51bca7b7994c8bd4a0c17b"} Feb 18 19:57:30 crc kubenswrapper[4932]: I0218 19:57:30.861257 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nvplf" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" probeResult="failure" output=< Feb 18 19:57:30 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 19:57:30 crc kubenswrapper[4932]: > Feb 18 19:57:32 crc kubenswrapper[4932]: I0218 
19:57:32.274868 4932 generic.go:334] "Generic (PLEG): container finished" podID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerID="9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d" exitCode=0 Feb 18 19:57:32 crc kubenswrapper[4932]: I0218 19:57:32.274929 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkrfs" event={"ID":"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a","Type":"ContainerDied","Data":"9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d"} Feb 18 19:57:33 crc kubenswrapper[4932]: I0218 19:57:33.290043 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkrfs" event={"ID":"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a","Type":"ContainerStarted","Data":"82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708"} Feb 18 19:57:33 crc kubenswrapper[4932]: I0218 19:57:33.313384 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wkrfs" podStartSLOduration=2.628125552 podStartE2EDuration="5.313362018s" podCreationTimestamp="2026-02-18 19:57:28 +0000 UTC" firstStartedPulling="2026-02-18 19:57:30.251407874 +0000 UTC m=+1413.833362719" lastFinishedPulling="2026-02-18 19:57:32.93664434 +0000 UTC m=+1416.518599185" observedRunningTime="2026-02-18 19:57:33.309707856 +0000 UTC m=+1416.891662711" watchObservedRunningTime="2026-02-18 19:57:33.313362018 +0000 UTC m=+1416.895316863" Feb 18 19:57:35 crc kubenswrapper[4932]: I0218 19:57:35.454751 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:57:35 crc kubenswrapper[4932]: I0218 19:57:35.455430 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:57:35 crc kubenswrapper[4932]: I0218 19:57:35.459250 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:57:35 crc 
kubenswrapper[4932]: I0218 19:57:35.463983 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:57:36 crc kubenswrapper[4932]: I0218 19:57:36.331950 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:57:36 crc kubenswrapper[4932]: I0218 19:57:36.343396 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:57:39 crc kubenswrapper[4932]: I0218 19:57:39.127328 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:39 crc kubenswrapper[4932]: I0218 19:57:39.127676 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:40 crc kubenswrapper[4932]: I0218 19:57:40.180652 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wkrfs" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="registry-server" probeResult="failure" output=< Feb 18 19:57:40 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 19:57:40 crc kubenswrapper[4932]: > Feb 18 19:57:40 crc kubenswrapper[4932]: I0218 19:57:40.853710 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nvplf" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" probeResult="failure" output=< Feb 18 19:57:40 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 19:57:40 crc kubenswrapper[4932]: > Feb 18 19:57:48 crc kubenswrapper[4932]: I0218 19:57:48.500441 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:57:49 crc kubenswrapper[4932]: I0218 19:57:49.174592 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:49 crc kubenswrapper[4932]: I0218 19:57:49.248254 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:49 crc kubenswrapper[4932]: I0218 19:57:49.413064 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkrfs"] Feb 18 19:57:50 crc kubenswrapper[4932]: I0218 19:57:50.479313 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wkrfs" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="registry-server" containerID="cri-o://82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708" gracePeriod=2 Feb 18 19:57:50 crc kubenswrapper[4932]: I0218 19:57:50.862112 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nvplf" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" probeResult="failure" output=< Feb 18 19:57:50 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 19:57:50 crc kubenswrapper[4932]: > Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.115012 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.175146 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tks82\" (UniqueName: \"kubernetes.io/projected/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-kube-api-access-tks82\") pod \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.175303 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-catalog-content\") pod \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.175388 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-utilities\") pod \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\" (UID: \"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a\") " Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.176224 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-utilities" (OuterVolumeSpecName: "utilities") pod "8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" (UID: "8bacbe3a-bfae-4502-806c-ba2eb1c7b48a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.198323 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-kube-api-access-tks82" (OuterVolumeSpecName: "kube-api-access-tks82") pod "8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" (UID: "8bacbe3a-bfae-4502-806c-ba2eb1c7b48a"). InnerVolumeSpecName "kube-api-access-tks82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.238380 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" (UID: "8bacbe3a-bfae-4502-806c-ba2eb1c7b48a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.277289 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.277325 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.277339 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tks82\" (UniqueName: \"kubernetes.io/projected/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a-kube-api-access-tks82\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.490624 4932 generic.go:334] "Generic (PLEG): container finished" podID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerID="82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708" exitCode=0 Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.490694 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wkrfs" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.490698 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkrfs" event={"ID":"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a","Type":"ContainerDied","Data":"82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708"} Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.490870 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wkrfs" event={"ID":"8bacbe3a-bfae-4502-806c-ba2eb1c7b48a","Type":"ContainerDied","Data":"78e974bf0eef1be4d5c43dcef00fe5ff25f566422b51bca7b7994c8bd4a0c17b"} Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.490933 4932 scope.go:117] "RemoveContainer" containerID="82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.514952 4932 scope.go:117] "RemoveContainer" containerID="9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.535003 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wkrfs"] Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.552844 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wkrfs"] Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.557264 4932 scope.go:117] "RemoveContainer" containerID="ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.595654 4932 scope.go:117] "RemoveContainer" containerID="82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708" Feb 18 19:57:51 crc kubenswrapper[4932]: E0218 19:57:51.596169 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708\": container with ID starting with 82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708 not found: ID does not exist" containerID="82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.596299 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708"} err="failed to get container status \"82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708\": rpc error: code = NotFound desc = could not find container \"82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708\": container with ID starting with 82cf683866e6cd62f872123548fd1bbe79b9d748a853e4b5c30771df8283f708 not found: ID does not exist" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.596334 4932 scope.go:117] "RemoveContainer" containerID="9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d" Feb 18 19:57:51 crc kubenswrapper[4932]: E0218 19:57:51.596839 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d\": container with ID starting with 9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d not found: ID does not exist" containerID="9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.596863 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d"} err="failed to get container status \"9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d\": rpc error: code = NotFound desc = could not find container \"9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d\": container with ID 
starting with 9c70c19586e4eae185290e512093a1c0d0c7ee1a9dc0f42ab963fb66393b295d not found: ID does not exist" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.596876 4932 scope.go:117] "RemoveContainer" containerID="ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285" Feb 18 19:57:51 crc kubenswrapper[4932]: E0218 19:57:51.597671 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285\": container with ID starting with ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285 not found: ID does not exist" containerID="ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285" Feb 18 19:57:51 crc kubenswrapper[4932]: I0218 19:57:51.597703 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285"} err="failed to get container status \"ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285\": rpc error: code = NotFound desc = could not find container \"ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285\": container with ID starting with ad0a1a0ef58ce7ddd6f673bb5c43ca464bc7513e1887943bb24ac52c5d233285 not found: ID does not exist" Feb 18 19:57:51 crc kubenswrapper[4932]: E0218 19:57:51.747167 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bacbe3a_bfae_4502_806c_ba2eb1c7b48a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bacbe3a_bfae_4502_806c_ba2eb1c7b48a.slice/crio-78e974bf0eef1be4d5c43dcef00fe5ff25f566422b51bca7b7994c8bd4a0c17b\": RecentStats: unable to find data in memory cache]" Feb 18 19:57:53 crc kubenswrapper[4932]: I0218 19:57:53.193416 4932 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" path="/var/lib/kubelet/pods/8bacbe3a-bfae-4502-806c-ba2eb1c7b48a/volumes" Feb 18 19:57:58 crc kubenswrapper[4932]: I0218 19:57:58.053837 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:57:59 crc kubenswrapper[4932]: I0218 19:57:59.010189 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:58:00 crc kubenswrapper[4932]: I0218 19:58:00.865833 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nvplf" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" probeResult="failure" output=< Feb 18 19:58:00 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 19:58:00 crc kubenswrapper[4932]: > Feb 18 19:58:01 crc kubenswrapper[4932]: I0218 19:58:01.702966 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="rabbitmq" containerID="cri-o://7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9" gracePeriod=604797 Feb 18 19:58:02 crc kubenswrapper[4932]: I0218 19:58:02.435183 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="rabbitmq" containerID="cri-o://70c0ba22a4bf84fc3b05812bcef99a157180fd838ac2af05d6ca1de21cd9e980" gracePeriod=604797 Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.267598 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460035 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-plugins\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460105 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-tls\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460214 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-plugins-conf\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460247 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dtlp\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-kube-api-access-2dtlp\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460306 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-config-data\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460323 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460361 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-erlang-cookie\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460410 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-pod-info\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460460 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-server-conf\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460472 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460498 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-confd\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460534 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-erlang-cookie-secret\") pod \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\" (UID: \"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.460987 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.461907 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.462826 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.467106 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.468519 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-pod-info" (OuterVolumeSpecName: "pod-info") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.469574 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.470636 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-kube-api-access-2dtlp" (OuterVolumeSpecName: "kube-api-access-2dtlp") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "kube-api-access-2dtlp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.494627 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.514702 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-config-data" (OuterVolumeSpecName: "config-data") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.525319 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-server-conf" (OuterVolumeSpecName: "server-conf") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562755 4932 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562791 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562801 4932 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562810 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dtlp\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-kube-api-access-2dtlp\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562818 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562841 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562850 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 
19:58:03.562858 4932 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.562865 4932 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.582630 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.627932 4932 generic.go:334] "Generic (PLEG): container finished" podID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerID="7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9" exitCode=0 Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.628035 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.628053 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c","Type":"ContainerDied","Data":"7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9"} Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.628356 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7111c1ce-b213-40cc-ac5f-7c4b9e80be5c","Type":"ContainerDied","Data":"080ccaf3edee131274523286f1e1cdf3b8aebb0e277f6e516ffc7e73a0cc72c7"} Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.628401 4932 scope.go:117] "RemoveContainer" containerID="7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.631805 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" (UID: "7111c1ce-b213-40cc-ac5f-7c4b9e80be5c"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.632073 4932 generic.go:334] "Generic (PLEG): container finished" podID="cd547864-4d03-45ae-8bb1-10a360d36599" containerID="70c0ba22a4bf84fc3b05812bcef99a157180fd838ac2af05d6ca1de21cd9e980" exitCode=0 Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.632107 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd547864-4d03-45ae-8bb1-10a360d36599","Type":"ContainerDied","Data":"70c0ba22a4bf84fc3b05812bcef99a157180fd838ac2af05d6ca1de21cd9e980"} Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.664967 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.665003 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.690389 4932 scope.go:117] "RemoveContainer" containerID="9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.725000 4932 scope.go:117] "RemoveContainer" containerID="7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9" Feb 18 19:58:03 crc kubenswrapper[4932]: E0218 19:58:03.725578 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9\": container with ID starting with 7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9 not found: ID does not exist" containerID="7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9" Feb 18 19:58:03 crc kubenswrapper[4932]: 
I0218 19:58:03.725636 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9"} err="failed to get container status \"7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9\": rpc error: code = NotFound desc = could not find container \"7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9\": container with ID starting with 7729eaea63a854a517a637abed7df32d4a9c6148c615614b5fc85be3ac6bd1d9 not found: ID does not exist" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.725677 4932 scope.go:117] "RemoveContainer" containerID="9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d" Feb 18 19:58:03 crc kubenswrapper[4932]: E0218 19:58:03.726032 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d\": container with ID starting with 9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d not found: ID does not exist" containerID="9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.726077 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d"} err="failed to get container status \"9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d\": rpc error: code = NotFound desc = could not find container \"9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d\": container with ID starting with 9b22c88fcfefc922bca187e413f9cbdc5c39e702add0f5abab74ad8e01c84d8d not found: ID does not exist" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.876399 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.971037 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-plugins-conf\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.971185 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd547864-4d03-45ae-8bb1-10a360d36599-erlang-cookie-secret\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.971715 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-erlang-cookie\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.971756 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-tls\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.971798 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.971859 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-server-conf\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.973116 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.973668 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.974049 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-confd\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.974086 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-plugins\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.974123 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd547864-4d03-45ae-8bb1-10a360d36599-pod-info\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.974149 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-config-data\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.974244 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwqrq\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-kube-api-access-fwqrq\") pod \"cd547864-4d03-45ae-8bb1-10a360d36599\" (UID: \"cd547864-4d03-45ae-8bb1-10a360d36599\") " Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.974855 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.977985 4932 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.978211 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.978229 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.994908 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-kube-api-access-fwqrq" (OuterVolumeSpecName: "kube-api-access-fwqrq") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "kube-api-access-fwqrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:03 crc kubenswrapper[4932]: I0218 19:58:03.999720 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cd547864-4d03-45ae-8bb1-10a360d36599-pod-info" (OuterVolumeSpecName: "pod-info") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.003550 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.003780 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.014338 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd547864-4d03-45ae-8bb1-10a360d36599-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.028311 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-config-data" (OuterVolumeSpecName: "config-data") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.045505 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.079365 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.087584 4932 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd547864-4d03-45ae-8bb1-10a360d36599-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.087615 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.087648 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.087658 4932 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd547864-4d03-45ae-8bb1-10a360d36599-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.087667 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.087703 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwqrq\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-kube-api-access-fwqrq\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:04 crc kubenswrapper[4932]: 
I0218 19:58:04.107324 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-server-conf" (OuterVolumeSpecName: "server-conf") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131107 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131508 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="rabbitmq" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131527 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="rabbitmq" Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131595 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="setup-container" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131603 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="setup-container" Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131614 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="rabbitmq" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131621 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="rabbitmq" Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131630 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="registry-server" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131655 4932 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="registry-server" Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131666 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="extract-utilities" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131672 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="extract-utilities" Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131684 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="extract-content" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131690 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="extract-content" Feb 18 19:58:04 crc kubenswrapper[4932]: E0218 19:58:04.131702 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="setup-container" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131708 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="setup-container" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.131988 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bacbe3a-bfae-4502-806c-ba2eb1c7b48a" containerName="registry-server" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.132005 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" containerName="rabbitmq" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.132051 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" containerName="rabbitmq" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.140294 4932 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.144259 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.151050 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ptcgt" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.151291 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.151454 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.152201 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.153125 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.154460 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.160628 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.163161 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.189284 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.189351 4932 reconciler_common.go:293] "Volume detached for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd547864-4d03-45ae-8bb1-10a360d36599-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.211035 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cd547864-4d03-45ae-8bb1-10a360d36599" (UID: "cd547864-4d03-45ae-8bb1-10a360d36599"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291285 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7jld\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-kube-api-access-p7jld\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291343 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-config-data\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291514 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d466e51b-87dc-413f-aeb2-f3566a46eeb5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291552 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291576 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291672 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291708 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291750 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291808 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-plugins\") 
pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.291919 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.292278 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d466e51b-87dc-413f-aeb2-f3566a46eeb5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.292362 4932 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd547864-4d03-45ae-8bb1-10a360d36599-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394526 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394579 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394660 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d466e51b-87dc-413f-aeb2-f3566a46eeb5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394706 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7jld\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-kube-api-access-p7jld\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394734 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-config-data\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394798 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d466e51b-87dc-413f-aeb2-f3566a46eeb5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394861 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394908 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-confd\") 
pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394953 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.394984 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.395013 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.395828 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.396749 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.397603 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.398148 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.399029 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d466e51b-87dc-413f-aeb2-f3566a46eeb5-config-data\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.399715 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.402208 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d466e51b-87dc-413f-aeb2-f3566a46eeb5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.403351 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.403589 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d466e51b-87dc-413f-aeb2-f3566a46eeb5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.404674 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.422426 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7jld\" (UniqueName: \"kubernetes.io/projected/d466e51b-87dc-413f-aeb2-f3566a46eeb5-kube-api-access-p7jld\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.444886 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"d466e51b-87dc-413f-aeb2-f3566a46eeb5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.496160 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.643548 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd547864-4d03-45ae-8bb1-10a360d36599","Type":"ContainerDied","Data":"df1d9be37e083e5a4584427f91148d70b49af32f754e3fd54a2d761cb7b0f9e2"} Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.643608 4932 scope.go:117] "RemoveContainer" containerID="70c0ba22a4bf84fc3b05812bcef99a157180fd838ac2af05d6ca1de21cd9e980" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.643770 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.674225 4932 scope.go:117] "RemoveContainer" containerID="7410562445bbd85ecddd8f8fa1c64974cd82f5bccf5b814dba01368f2c897a68" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.745119 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.757254 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.786866 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.788524 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.793710 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.793888 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.794041 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-l229h" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.794185 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.794295 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.794401 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.794550 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.817973 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906288 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906396 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906452 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da761aa0-8599-4aee-9078-ecaf2a04f259-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906709 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da761aa0-8599-4aee-9078-ecaf2a04f259-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906794 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906835 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906871 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.906971 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57r74\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-kube-api-access-57r74\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.907078 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.907204 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:04 crc kubenswrapper[4932]: I0218 19:58:04.907249 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008399 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/da761aa0-8599-4aee-9078-ecaf2a04f259-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008461 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008490 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008521 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008551 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57r74\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-kube-api-access-57r74\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008575 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008609 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008634 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008655 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008701 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.008716 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da761aa0-8599-4aee-9078-ecaf2a04f259-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 
19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.009211 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.009681 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.010360 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.010383 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.010468 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.010570 4932 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da761aa0-8599-4aee-9078-ecaf2a04f259-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.016260 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da761aa0-8599-4aee-9078-ecaf2a04f259-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.016986 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.017427 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da761aa0-8599-4aee-9078-ecaf2a04f259-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.017726 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.035998 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57r74\" (UniqueName: \"kubernetes.io/projected/da761aa0-8599-4aee-9078-ecaf2a04f259-kube-api-access-57r74\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.049588 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da761aa0-8599-4aee-9078-ecaf2a04f259\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.054462 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.114000 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.199012 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7111c1ce-b213-40cc-ac5f-7c4b9e80be5c" path="/var/lib/kubelet/pods/7111c1ce-b213-40cc-ac5f-7c4b9e80be5c/volumes" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.200281 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd547864-4d03-45ae-8bb1-10a360d36599" path="/var/lib/kubelet/pods/cd547864-4d03-45ae-8bb1-10a360d36599/volumes" Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.594632 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:58:05 crc kubenswrapper[4932]: W0218 19:58:05.598589 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda761aa0_8599_4aee_9078_ecaf2a04f259.slice/crio-00d28f70a7a85d9160fc7bed44cc3914e208f3266c739718ce43a791791deb41 WatchSource:0}: Error finding container 00d28f70a7a85d9160fc7bed44cc3914e208f3266c739718ce43a791791deb41: Status 404 returned error can't find the container with id 00d28f70a7a85d9160fc7bed44cc3914e208f3266c739718ce43a791791deb41 Feb 
18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.664268 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d466e51b-87dc-413f-aeb2-f3566a46eeb5","Type":"ContainerStarted","Data":"c8549d8b3a7b9f36c193177519d51b37e168e6fc798904ac0942bb7314cea96a"} Feb 18 19:58:05 crc kubenswrapper[4932]: I0218 19:58:05.666803 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da761aa0-8599-4aee-9078-ecaf2a04f259","Type":"ContainerStarted","Data":"00d28f70a7a85d9160fc7bed44cc3914e208f3266c739718ce43a791791deb41"} Feb 18 19:58:07 crc kubenswrapper[4932]: I0218 19:58:07.691768 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da761aa0-8599-4aee-9078-ecaf2a04f259","Type":"ContainerStarted","Data":"1b8037c10dd0ae5363e01ddd9be7d861df2dac91332a086e6a5ad5c81c97cf0c"} Feb 18 19:58:07 crc kubenswrapper[4932]: I0218 19:58:07.695272 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d466e51b-87dc-413f-aeb2-f3566a46eeb5","Type":"ContainerStarted","Data":"fdbbac4e492980e8a9121b26a7487b983c016efa01fd41436c950b17336cea34"} Feb 18 19:58:09 crc kubenswrapper[4932]: I0218 19:58:09.914060 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:58:09 crc kubenswrapper[4932]: I0218 19:58:09.963537 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:58:10 crc kubenswrapper[4932]: I0218 19:58:10.155894 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvplf"] Feb 18 19:58:11 crc kubenswrapper[4932]: I0218 19:58:11.739997 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nvplf" 
podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" containerID="cri-o://09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a" gracePeriod=2 Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.169585 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.279488 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npzvf\" (UniqueName: \"kubernetes.io/projected/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-kube-api-access-npzvf\") pod \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.282612 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-catalog-content\") pod \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.284775 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-utilities\") pod \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\" (UID: \"cdcdbe71-7ce4-4038-b13a-345f14b7a80d\") " Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.285597 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-utilities" (OuterVolumeSpecName: "utilities") pod "cdcdbe71-7ce4-4038-b13a-345f14b7a80d" (UID: "cdcdbe71-7ce4-4038-b13a-345f14b7a80d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.286133 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.295476 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-kube-api-access-npzvf" (OuterVolumeSpecName: "kube-api-access-npzvf") pod "cdcdbe71-7ce4-4038-b13a-345f14b7a80d" (UID: "cdcdbe71-7ce4-4038-b13a-345f14b7a80d"). InnerVolumeSpecName "kube-api-access-npzvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.388348 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npzvf\" (UniqueName: \"kubernetes.io/projected/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-kube-api-access-npzvf\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.391814 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cdcdbe71-7ce4-4038-b13a-345f14b7a80d" (UID: "cdcdbe71-7ce4-4038-b13a-345f14b7a80d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.490511 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdcdbe71-7ce4-4038-b13a-345f14b7a80d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.751442 4932 generic.go:334] "Generic (PLEG): container finished" podID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerID="09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a" exitCode=0 Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.751498 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerDied","Data":"09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a"} Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.751538 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nvplf" event={"ID":"cdcdbe71-7ce4-4038-b13a-345f14b7a80d","Type":"ContainerDied","Data":"bd6d1dac6bf3ebca465127b4e668733d6b3eab206b93e857a3ffc9cc951ff030"} Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.751577 4932 scope.go:117] "RemoveContainer" containerID="09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.751583 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nvplf" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.799803 4932 scope.go:117] "RemoveContainer" containerID="023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.808042 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nvplf"] Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.816976 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nvplf"] Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.829372 4932 scope.go:117] "RemoveContainer" containerID="6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.872858 4932 scope.go:117] "RemoveContainer" containerID="09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a" Feb 18 19:58:12 crc kubenswrapper[4932]: E0218 19:58:12.873480 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a\": container with ID starting with 09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a not found: ID does not exist" containerID="09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.873537 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a"} err="failed to get container status \"09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a\": rpc error: code = NotFound desc = could not find container \"09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a\": container with ID starting with 09aacb7c747003e38e25e2bbf7bc0a125089dfaae5ab96a6e01ac3609c03577a not found: ID does 
not exist" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.873570 4932 scope.go:117] "RemoveContainer" containerID="023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8" Feb 18 19:58:12 crc kubenswrapper[4932]: E0218 19:58:12.874069 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8\": container with ID starting with 023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8 not found: ID does not exist" containerID="023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.874153 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8"} err="failed to get container status \"023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8\": rpc error: code = NotFound desc = could not find container \"023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8\": container with ID starting with 023c78e065bfbea5d860b9d76242fdaf6508e65d3a7b9b6dac29626062f231a8 not found: ID does not exist" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.874230 4932 scope.go:117] "RemoveContainer" containerID="6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a" Feb 18 19:58:12 crc kubenswrapper[4932]: E0218 19:58:12.875587 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a\": container with ID starting with 6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a not found: ID does not exist" containerID="6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a" Feb 18 19:58:12 crc kubenswrapper[4932]: I0218 19:58:12.875687 4932 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a"} err="failed to get container status \"6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a\": rpc error: code = NotFound desc = could not find container \"6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a\": container with ID starting with 6d8533cef25116c5f8b7055450fa56ca2cd04d828ddbd49de9ee4a7c3d56b99a not found: ID does not exist" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.194026 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" path="/var/lib/kubelet/pods/cdcdbe71-7ce4-4038-b13a-345f14b7a80d/volumes" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.631083 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9ff754475-2bjzt"] Feb 18 19:58:13 crc kubenswrapper[4932]: E0218 19:58:13.631469 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="extract-content" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.631486 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="extract-content" Feb 18 19:58:13 crc kubenswrapper[4932]: E0218 19:58:13.631528 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.631535 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" Feb 18 19:58:13 crc kubenswrapper[4932]: E0218 19:58:13.631548 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="extract-utilities" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.631555 4932 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="extract-utilities" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.631741 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcdbe71-7ce4-4038-b13a-345f14b7a80d" containerName="registry-server" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.633106 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.635536 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.647570 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9ff754475-2bjzt"] Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.714755 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-sb\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.714863 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-svc\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.714900 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pxmq\" (UniqueName: \"kubernetes.io/projected/96b80a13-2da6-4c91-a09d-1e935313a13f-kube-api-access-9pxmq\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " 
pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.714972 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-nb\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.715024 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-openstack-edpm-ipam\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.715063 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-swift-storage-0\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.715113 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-config\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.817717 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-config\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " 
pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.817846 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-sb\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.817902 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-svc\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.817939 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pxmq\" (UniqueName: \"kubernetes.io/projected/96b80a13-2da6-4c91-a09d-1e935313a13f-kube-api-access-9pxmq\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.817988 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-nb\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.818030 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-openstack-edpm-ipam\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 
19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.818089 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-swift-storage-0\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.819965 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-swift-storage-0\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.822114 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-sb\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.822205 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-openstack-edpm-ipam\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.822664 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-nb\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.823149 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-config\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.823385 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-svc\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.849314 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pxmq\" (UniqueName: \"kubernetes.io/projected/96b80a13-2da6-4c91-a09d-1e935313a13f-kube-api-access-9pxmq\") pod \"dnsmasq-dns-9ff754475-2bjzt\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:13 crc kubenswrapper[4932]: I0218 19:58:13.966041 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:14 crc kubenswrapper[4932]: I0218 19:58:14.467109 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9ff754475-2bjzt"] Feb 18 19:58:14 crc kubenswrapper[4932]: I0218 19:58:14.776424 4932 generic.go:334] "Generic (PLEG): container finished" podID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerID="296162c2e2b6e7e74bfb52a7dd47f5e5692cdadf7b2924591218a8984d84e2df" exitCode=0 Feb 18 19:58:14 crc kubenswrapper[4932]: I0218 19:58:14.776505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" event={"ID":"96b80a13-2da6-4c91-a09d-1e935313a13f","Type":"ContainerDied","Data":"296162c2e2b6e7e74bfb52a7dd47f5e5692cdadf7b2924591218a8984d84e2df"} Feb 18 19:58:14 crc kubenswrapper[4932]: I0218 19:58:14.776844 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" event={"ID":"96b80a13-2da6-4c91-a09d-1e935313a13f","Type":"ContainerStarted","Data":"9e066ae757e52496269799f9e7d2df6157f05d0842da76118feae5596927b07d"} Feb 18 19:58:15 crc kubenswrapper[4932]: I0218 19:58:15.792590 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" event={"ID":"96b80a13-2da6-4c91-a09d-1e935313a13f","Type":"ContainerStarted","Data":"40b4bfc44e288dd7c847df3dd0b2a945f9df48fd611e15411b34cc995f0f85cf"} Feb 18 19:58:15 crc kubenswrapper[4932]: I0218 19:58:15.792955 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:15 crc kubenswrapper[4932]: I0218 19:58:15.828801 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" podStartSLOduration=2.8287835919999997 podStartE2EDuration="2.828783592s" podCreationTimestamp="2026-02-18 19:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:15.822837854 +0000 UTC m=+1459.404792729" watchObservedRunningTime="2026-02-18 19:58:15.828783592 +0000 UTC m=+1459.410738437" Feb 18 19:58:23 crc kubenswrapper[4932]: I0218 19:58:23.968355 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.051705 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c95b7c697-ptvr7"] Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.052061 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerName="dnsmasq-dns" containerID="cri-o://6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632" gracePeriod=10 Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.250383 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d5b5f5475-czsf7"] Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.269128 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d5b5f5475-czsf7"] Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.269269 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.378373 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.378657 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-dns-swift-storage-0\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.378739 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-dns-svc\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.378936 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt8pq\" (UniqueName: \"kubernetes.io/projected/f60b5155-406c-4c95-9848-2792faba2235-kube-api-access-pt8pq\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.378977 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-config\") pod 
\"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.379026 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-ovsdbserver-sb\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.379120 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-ovsdbserver-nb\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481211 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-ovsdbserver-nb\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481324 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481440 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-dns-swift-storage-0\") 
pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481506 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-dns-svc\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481586 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt8pq\" (UniqueName: \"kubernetes.io/projected/f60b5155-406c-4c95-9848-2792faba2235-kube-api-access-pt8pq\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481616 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-config\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.481651 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-ovsdbserver-sb\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.482761 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-ovsdbserver-sb\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: 
\"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.483475 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-ovsdbserver-nb\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.483987 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-dns-svc\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.484454 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.484484 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-dns-swift-storage-0\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.485061 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f60b5155-406c-4c95-9848-2792faba2235-config\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 
crc kubenswrapper[4932]: I0218 19:58:24.504972 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt8pq\" (UniqueName: \"kubernetes.io/projected/f60b5155-406c-4c95-9848-2792faba2235-kube-api-access-pt8pq\") pod \"dnsmasq-dns-5d5b5f5475-czsf7\" (UID: \"f60b5155-406c-4c95-9848-2792faba2235\") " pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.588486 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.598703 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.685488 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbxnb\" (UniqueName: \"kubernetes.io/projected/f91611fc-84cb-4a52-8943-b4a5c7481f45-kube-api-access-vbxnb\") pod \"f91611fc-84cb-4a52-8943-b4a5c7481f45\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.685542 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-config\") pod \"f91611fc-84cb-4a52-8943-b4a5c7481f45\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.685571 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-svc\") pod \"f91611fc-84cb-4a52-8943-b4a5c7481f45\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.685630 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-nb\") pod \"f91611fc-84cb-4a52-8943-b4a5c7481f45\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.685658 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-swift-storage-0\") pod \"f91611fc-84cb-4a52-8943-b4a5c7481f45\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.685729 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-sb\") pod \"f91611fc-84cb-4a52-8943-b4a5c7481f45\" (UID: \"f91611fc-84cb-4a52-8943-b4a5c7481f45\") " Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.689820 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91611fc-84cb-4a52-8943-b4a5c7481f45-kube-api-access-vbxnb" (OuterVolumeSpecName: "kube-api-access-vbxnb") pod "f91611fc-84cb-4a52-8943-b4a5c7481f45" (UID: "f91611fc-84cb-4a52-8943-b4a5c7481f45"). InnerVolumeSpecName "kube-api-access-vbxnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.749491 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f91611fc-84cb-4a52-8943-b4a5c7481f45" (UID: "f91611fc-84cb-4a52-8943-b4a5c7481f45"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.774069 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-config" (OuterVolumeSpecName: "config") pod "f91611fc-84cb-4a52-8943-b4a5c7481f45" (UID: "f91611fc-84cb-4a52-8943-b4a5c7481f45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.774250 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f91611fc-84cb-4a52-8943-b4a5c7481f45" (UID: "f91611fc-84cb-4a52-8943-b4a5c7481f45"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.781114 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f91611fc-84cb-4a52-8943-b4a5c7481f45" (UID: "f91611fc-84cb-4a52-8943-b4a5c7481f45"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.788953 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.788988 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbxnb\" (UniqueName: \"kubernetes.io/projected/f91611fc-84cb-4a52-8943-b4a5c7481f45-kube-api-access-vbxnb\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.789000 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.789010 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.789020 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.796866 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f91611fc-84cb-4a52-8943-b4a5c7481f45" (UID: "f91611fc-84cb-4a52-8943-b4a5c7481f45"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.888423 4932 generic.go:334] "Generic (PLEG): container finished" podID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerID="6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632" exitCode=0 Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.888485 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" event={"ID":"f91611fc-84cb-4a52-8943-b4a5c7481f45","Type":"ContainerDied","Data":"6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632"} Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.888517 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" event={"ID":"f91611fc-84cb-4a52-8943-b4a5c7481f45","Type":"ContainerDied","Data":"655d5fb141738aad0155e62442b9035066c7a9ec2985b3b96a40dbf2d8892c36"} Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.888534 4932 scope.go:117] "RemoveContainer" containerID="6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.888768 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c95b7c697-ptvr7" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.896156 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91611fc-84cb-4a52-8943-b4a5c7481f45-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.926118 4932 scope.go:117] "RemoveContainer" containerID="9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.937941 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c95b7c697-ptvr7"] Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.948488 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c95b7c697-ptvr7"] Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.957073 4932 scope.go:117] "RemoveContainer" containerID="6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632" Feb 18 19:58:24 crc kubenswrapper[4932]: E0218 19:58:24.957868 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632\": container with ID starting with 6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632 not found: ID does not exist" containerID="6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.957917 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632"} err="failed to get container status \"6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632\": rpc error: code = NotFound desc = could not find container \"6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632\": container with ID starting with 
6f7dae99fe8307a44c04e49e378446558a816a80863fddd37088532c2f9fd632 not found: ID does not exist" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.957964 4932 scope.go:117] "RemoveContainer" containerID="9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e" Feb 18 19:58:24 crc kubenswrapper[4932]: E0218 19:58:24.958450 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e\": container with ID starting with 9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e not found: ID does not exist" containerID="9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e" Feb 18 19:58:24 crc kubenswrapper[4932]: I0218 19:58:24.958480 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e"} err="failed to get container status \"9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e\": rpc error: code = NotFound desc = could not find container \"9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e\": container with ID starting with 9419bf8be8f2a37b3cf214bf296de6889682ce2fc984ace13ff343025ac91c6e not found: ID does not exist" Feb 18 19:58:25 crc kubenswrapper[4932]: I0218 19:58:25.069884 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d5b5f5475-czsf7"] Feb 18 19:58:25 crc kubenswrapper[4932]: I0218 19:58:25.197090 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" path="/var/lib/kubelet/pods/f91611fc-84cb-4a52-8943-b4a5c7481f45/volumes" Feb 18 19:58:25 crc kubenswrapper[4932]: I0218 19:58:25.901779 4932 generic.go:334] "Generic (PLEG): container finished" podID="f60b5155-406c-4c95-9848-2792faba2235" containerID="3a57ec4bc725cd3028981967c5dd1616d9b120d3aa5bb3014525c1e775a6bf41" 
exitCode=0 Feb 18 19:58:25 crc kubenswrapper[4932]: I0218 19:58:25.902125 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" event={"ID":"f60b5155-406c-4c95-9848-2792faba2235","Type":"ContainerDied","Data":"3a57ec4bc725cd3028981967c5dd1616d9b120d3aa5bb3014525c1e775a6bf41"} Feb 18 19:58:25 crc kubenswrapper[4932]: I0218 19:58:25.902158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" event={"ID":"f60b5155-406c-4c95-9848-2792faba2235","Type":"ContainerStarted","Data":"634def3d091823f021dfcf5822b341cfc185c7fcb4324aea6b4a44455cbbe7db"} Feb 18 19:58:26 crc kubenswrapper[4932]: I0218 19:58:26.912961 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" event={"ID":"f60b5155-406c-4c95-9848-2792faba2235","Type":"ContainerStarted","Data":"edc9be894ecf757f4f8758c1d70ace3924bc64d1c2b1b352cd0d88079cc0d516"} Feb 18 19:58:26 crc kubenswrapper[4932]: I0218 19:58:26.915193 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:26 crc kubenswrapper[4932]: I0218 19:58:26.959049 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" podStartSLOduration=2.959019516 podStartE2EDuration="2.959019516s" podCreationTimestamp="2026-02-18 19:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:26.954487263 +0000 UTC m=+1470.536442108" watchObservedRunningTime="2026-02-18 19:58:26.959019516 +0000 UTC m=+1470.540974361" Feb 18 19:58:34 crc kubenswrapper[4932]: I0218 19:58:34.591449 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d5b5f5475-czsf7" Feb 18 19:58:34 crc kubenswrapper[4932]: I0218 19:58:34.664652 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-9ff754475-2bjzt"] Feb 18 19:58:34 crc kubenswrapper[4932]: I0218 19:58:34.665097 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerName="dnsmasq-dns" containerID="cri-o://40b4bfc44e288dd7c847df3dd0b2a945f9df48fd611e15411b34cc995f0f85cf" gracePeriod=10 Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.019459 4932 generic.go:334] "Generic (PLEG): container finished" podID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerID="40b4bfc44e288dd7c847df3dd0b2a945f9df48fd611e15411b34cc995f0f85cf" exitCode=0 Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.019545 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" event={"ID":"96b80a13-2da6-4c91-a09d-1e935313a13f","Type":"ContainerDied","Data":"40b4bfc44e288dd7c847df3dd0b2a945f9df48fd611e15411b34cc995f0f85cf"} Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.235275 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.321862 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-sb\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.321948 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-openstack-edpm-ipam\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.322014 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-swift-storage-0\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.322098 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-config\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.322270 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pxmq\" (UniqueName: \"kubernetes.io/projected/96b80a13-2da6-4c91-a09d-1e935313a13f-kube-api-access-9pxmq\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.322502 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-nb\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.322649 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-svc\") pod \"96b80a13-2da6-4c91-a09d-1e935313a13f\" (UID: \"96b80a13-2da6-4c91-a09d-1e935313a13f\") " Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.335454 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b80a13-2da6-4c91-a09d-1e935313a13f-kube-api-access-9pxmq" (OuterVolumeSpecName: "kube-api-access-9pxmq") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "kube-api-access-9pxmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.385333 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.393494 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.398839 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-config" (OuterVolumeSpecName: "config") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.399053 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.405408 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.410673 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "96b80a13-2da6-4c91-a09d-1e935313a13f" (UID: "96b80a13-2da6-4c91-a09d-1e935313a13f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425800 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425837 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pxmq\" (UniqueName: \"kubernetes.io/projected/96b80a13-2da6-4c91-a09d-1e935313a13f-kube-api-access-9pxmq\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425849 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425859 4932 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425867 4932 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425877 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:35 crc kubenswrapper[4932]: I0218 19:58:35.425918 4932 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96b80a13-2da6-4c91-a09d-1e935313a13f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:36 crc kubenswrapper[4932]: I0218 19:58:36.029589 
4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" event={"ID":"96b80a13-2da6-4c91-a09d-1e935313a13f","Type":"ContainerDied","Data":"9e066ae757e52496269799f9e7d2df6157f05d0842da76118feae5596927b07d"} Feb 18 19:58:36 crc kubenswrapper[4932]: I0218 19:58:36.029670 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9ff754475-2bjzt" Feb 18 19:58:36 crc kubenswrapper[4932]: I0218 19:58:36.029821 4932 scope.go:117] "RemoveContainer" containerID="40b4bfc44e288dd7c847df3dd0b2a945f9df48fd611e15411b34cc995f0f85cf" Feb 18 19:58:36 crc kubenswrapper[4932]: I0218 19:58:36.055919 4932 scope.go:117] "RemoveContainer" containerID="296162c2e2b6e7e74bfb52a7dd47f5e5692cdadf7b2924591218a8984d84e2df" Feb 18 19:58:36 crc kubenswrapper[4932]: I0218 19:58:36.066384 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9ff754475-2bjzt"] Feb 18 19:58:36 crc kubenswrapper[4932]: I0218 19:58:36.076715 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9ff754475-2bjzt"] Feb 18 19:58:37 crc kubenswrapper[4932]: I0218 19:58:37.196147 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" path="/var/lib/kubelet/pods/96b80a13-2da6-4c91-a09d-1e935313a13f/volumes" Feb 18 19:58:40 crc kubenswrapper[4932]: I0218 19:58:40.072782 4932 generic.go:334] "Generic (PLEG): container finished" podID="d466e51b-87dc-413f-aeb2-f3566a46eeb5" containerID="fdbbac4e492980e8a9121b26a7487b983c016efa01fd41436c950b17336cea34" exitCode=0 Feb 18 19:58:40 crc kubenswrapper[4932]: I0218 19:58:40.072903 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d466e51b-87dc-413f-aeb2-f3566a46eeb5","Type":"ContainerDied","Data":"fdbbac4e492980e8a9121b26a7487b983c016efa01fd41436c950b17336cea34"} Feb 18 19:58:40 crc kubenswrapper[4932]: I0218 19:58:40.074804 4932 
generic.go:334] "Generic (PLEG): container finished" podID="da761aa0-8599-4aee-9078-ecaf2a04f259" containerID="1b8037c10dd0ae5363e01ddd9be7d861df2dac91332a086e6a5ad5c81c97cf0c" exitCode=0 Feb 18 19:58:40 crc kubenswrapper[4932]: I0218 19:58:40.075027 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da761aa0-8599-4aee-9078-ecaf2a04f259","Type":"ContainerDied","Data":"1b8037c10dd0ae5363e01ddd9be7d861df2dac91332a086e6a5ad5c81c97cf0c"} Feb 18 19:58:41 crc kubenswrapper[4932]: I0218 19:58:41.094272 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d466e51b-87dc-413f-aeb2-f3566a46eeb5","Type":"ContainerStarted","Data":"660ff38fc3ee70e8c08d06e92bb83529e53d107b967135ac9f4e35aec18b3c1f"} Feb 18 19:58:41 crc kubenswrapper[4932]: I0218 19:58:41.096540 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 19:58:41 crc kubenswrapper[4932]: I0218 19:58:41.100605 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da761aa0-8599-4aee-9078-ecaf2a04f259","Type":"ContainerStarted","Data":"18213112e6dd7afd3658f77b264117ddcd17389ac45045600896335fdb1ba2bd"} Feb 18 19:58:41 crc kubenswrapper[4932]: I0218 19:58:41.100848 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:41 crc kubenswrapper[4932]: I0218 19:58:41.127486 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.12745948 podStartE2EDuration="38.12745948s" podCreationTimestamp="2026-02-18 19:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:41.122786014 +0000 UTC m=+1484.704740869" watchObservedRunningTime="2026-02-18 19:58:41.12745948 +0000 UTC 
m=+1484.709414325" Feb 18 19:58:41 crc kubenswrapper[4932]: I0218 19:58:41.157398 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.157377327 podStartE2EDuration="37.157377327s" podCreationTimestamp="2026-02-18 19:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:58:41.147807898 +0000 UTC m=+1484.729762743" watchObservedRunningTime="2026-02-18 19:58:41.157377327 +0000 UTC m=+1484.739332172" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.871386 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn"] Feb 18 19:58:46 crc kubenswrapper[4932]: E0218 19:58:46.873649 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerName="dnsmasq-dns" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.876679 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerName="dnsmasq-dns" Feb 18 19:58:46 crc kubenswrapper[4932]: E0218 19:58:46.876782 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerName="init" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.876849 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerName="init" Feb 18 19:58:46 crc kubenswrapper[4932]: E0218 19:58:46.876925 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerName="init" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.876989 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerName="init" Feb 18 19:58:46 crc kubenswrapper[4932]: E0218 19:58:46.877087 4932 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerName="dnsmasq-dns" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.877275 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerName="dnsmasq-dns" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.877723 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b80a13-2da6-4c91-a09d-1e935313a13f" containerName="dnsmasq-dns" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.877833 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91611fc-84cb-4a52-8943-b4a5c7481f45" containerName="dnsmasq-dns" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.878750 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.881077 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.881468 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.881656 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.881557 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.921280 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn"] Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.973148 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.973357 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.973489 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:46 crc kubenswrapper[4932]: I0218 19:58:46.973677 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xcv5\" (UniqueName: \"kubernetes.io/projected/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-kube-api-access-2xcv5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.075262 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.075337 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.075417 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.075540 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xcv5\" (UniqueName: \"kubernetes.io/projected/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-kube-api-access-2xcv5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.081600 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc 
kubenswrapper[4932]: I0218 19:58:47.082302 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.085739 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.094234 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xcv5\" (UniqueName: \"kubernetes.io/projected/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-kube-api-access-2xcv5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.242590 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:58:47 crc kubenswrapper[4932]: I0218 19:58:47.914454 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn"] Feb 18 19:58:48 crc kubenswrapper[4932]: I0218 19:58:48.168407 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" event={"ID":"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6","Type":"ContainerStarted","Data":"67773ceef267d72504d994b948ac20b7519b184fb9a0f5ce9474a58a999c5b39"} Feb 18 19:58:54 crc kubenswrapper[4932]: I0218 19:58:54.499585 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 19:58:55 crc kubenswrapper[4932]: I0218 19:58:55.117497 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:58:57 crc kubenswrapper[4932]: I0218 19:58:57.278689 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" event={"ID":"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6","Type":"ContainerStarted","Data":"4264b6f8a7461be6203ec28a9363259faa2db962dc1de52b367238e88dcc3b36"} Feb 18 19:58:57 crc kubenswrapper[4932]: I0218 19:58:57.299595 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" podStartSLOduration=2.462644248 podStartE2EDuration="11.299572454s" podCreationTimestamp="2026-02-18 19:58:46 +0000 UTC" firstStartedPulling="2026-02-18 19:58:47.927967854 +0000 UTC m=+1491.509922699" lastFinishedPulling="2026-02-18 19:58:56.76489606 +0000 UTC m=+1500.346850905" observedRunningTime="2026-02-18 19:58:57.298673552 +0000 UTC m=+1500.880628417" watchObservedRunningTime="2026-02-18 19:58:57.299572454 +0000 UTC m=+1500.881527299" Feb 18 19:58:57 crc kubenswrapper[4932]: 
I0218 19:58:57.606576 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:58:57 crc kubenswrapper[4932]: I0218 19:58:57.606902 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:59:02 crc kubenswrapper[4932]: I0218 19:59:02.853473 4932 scope.go:117] "RemoveContainer" containerID="0c56a84dec06134e2f4b962a1631f1595e0dce10e33a951ccd5303bade9b2a6e" Feb 18 19:59:02 crc kubenswrapper[4932]: I0218 19:59:02.882010 4932 scope.go:117] "RemoveContainer" containerID="f9f90dc57da26de1688aea88788204ac610c6fa3970ee4965c6add216640da6a" Feb 18 19:59:02 crc kubenswrapper[4932]: I0218 19:59:02.936248 4932 scope.go:117] "RemoveContainer" containerID="da426b82651806673889b52158bea2dd7d720c322fbc355879403c25885c3ec1" Feb 18 19:59:07 crc kubenswrapper[4932]: I0218 19:59:07.392951 4932 generic.go:334] "Generic (PLEG): container finished" podID="b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" containerID="4264b6f8a7461be6203ec28a9363259faa2db962dc1de52b367238e88dcc3b36" exitCode=0 Feb 18 19:59:07 crc kubenswrapper[4932]: I0218 19:59:07.393693 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" event={"ID":"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6","Type":"ContainerDied","Data":"4264b6f8a7461be6203ec28a9363259faa2db962dc1de52b367238e88dcc3b36"} Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.081641 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.259648 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-ssh-key-openstack-edpm-ipam\") pod \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.259775 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xcv5\" (UniqueName: \"kubernetes.io/projected/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-kube-api-access-2xcv5\") pod \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.259921 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-repo-setup-combined-ca-bundle\") pod \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.260005 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-inventory\") pod \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\" (UID: \"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6\") " Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.269169 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-kube-api-access-2xcv5" (OuterVolumeSpecName: "kube-api-access-2xcv5") pod "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" (UID: "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6"). InnerVolumeSpecName "kube-api-access-2xcv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.269992 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" (UID: "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.300009 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" (UID: "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.313582 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-inventory" (OuterVolumeSpecName: "inventory") pod "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" (UID: "b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.366319 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xcv5\" (UniqueName: \"kubernetes.io/projected/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-kube-api-access-2xcv5\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.366842 4932 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.366926 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.367005 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.443992 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" event={"ID":"b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6","Type":"ContainerDied","Data":"67773ceef267d72504d994b948ac20b7519b184fb9a0f5ce9474a58a999c5b39"} Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.444056 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67773ceef267d72504d994b948ac20b7519b184fb9a0f5ce9474a58a999c5b39" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.444140 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vsscn" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.518402 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4"] Feb 18 19:59:09 crc kubenswrapper[4932]: E0218 19:59:09.518776 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.518794 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.519000 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6fd3bc0-e844-44bf-a82b-e8447a1ea7a6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.519766 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.522201 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.522282 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.522824 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.522919 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.547916 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4"] Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.677721 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.677933 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.678217 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqvcz\" (UniqueName: \"kubernetes.io/projected/645d3722-79e7-4b78-a24d-2f5eca6c2714-kube-api-access-cqvcz\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.780913 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqvcz\" (UniqueName: \"kubernetes.io/projected/645d3722-79e7-4b78-a24d-2f5eca6c2714-kube-api-access-cqvcz\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.781066 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.782455 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.784964 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.785393 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.797307 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqvcz\" (UniqueName: \"kubernetes.io/projected/645d3722-79e7-4b78-a24d-2f5eca6c2714-kube-api-access-cqvcz\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zc8c4\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:09 crc kubenswrapper[4932]: I0218 19:59:09.840948 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:10 crc kubenswrapper[4932]: I0218 19:59:10.415334 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4"] Feb 18 19:59:10 crc kubenswrapper[4932]: I0218 19:59:10.459831 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" event={"ID":"645d3722-79e7-4b78-a24d-2f5eca6c2714","Type":"ContainerStarted","Data":"1c57b153b6380278224723c713679cd5d8ba06ea370a3ca6688d6ec583ada080"} Feb 18 19:59:11 crc kubenswrapper[4932]: I0218 19:59:11.472527 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" event={"ID":"645d3722-79e7-4b78-a24d-2f5eca6c2714","Type":"ContainerStarted","Data":"61d86c09c955c0e8549b971a80757dba1d7bb26249376d95200ffd9c4ae8a004"} Feb 18 19:59:11 crc kubenswrapper[4932]: I0218 19:59:11.494464 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" podStartSLOduration=2.063098996 podStartE2EDuration="2.494432376s" podCreationTimestamp="2026-02-18 19:59:09 +0000 UTC" firstStartedPulling="2026-02-18 19:59:10.422202187 +0000 UTC m=+1514.004157032" lastFinishedPulling="2026-02-18 19:59:10.853535567 +0000 UTC m=+1514.435490412" observedRunningTime="2026-02-18 19:59:11.487577806 +0000 UTC m=+1515.069532651" watchObservedRunningTime="2026-02-18 19:59:11.494432376 +0000 UTC m=+1515.076387231" Feb 18 19:59:13 crc kubenswrapper[4932]: I0218 19:59:13.493718 4932 generic.go:334] "Generic (PLEG): container finished" podID="645d3722-79e7-4b78-a24d-2f5eca6c2714" containerID="61d86c09c955c0e8549b971a80757dba1d7bb26249376d95200ffd9c4ae8a004" exitCode=0 Feb 18 19:59:13 crc kubenswrapper[4932]: I0218 19:59:13.493800 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" event={"ID":"645d3722-79e7-4b78-a24d-2f5eca6c2714","Type":"ContainerDied","Data":"61d86c09c955c0e8549b971a80757dba1d7bb26249376d95200ffd9c4ae8a004"} Feb 18 19:59:14 crc kubenswrapper[4932]: I0218 19:59:14.995135 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.009798 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqvcz\" (UniqueName: \"kubernetes.io/projected/645d3722-79e7-4b78-a24d-2f5eca6c2714-kube-api-access-cqvcz\") pod \"645d3722-79e7-4b78-a24d-2f5eca6c2714\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.009869 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-ssh-key-openstack-edpm-ipam\") pod \"645d3722-79e7-4b78-a24d-2f5eca6c2714\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.009939 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-inventory\") pod \"645d3722-79e7-4b78-a24d-2f5eca6c2714\" (UID: \"645d3722-79e7-4b78-a24d-2f5eca6c2714\") " Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.019371 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645d3722-79e7-4b78-a24d-2f5eca6c2714-kube-api-access-cqvcz" (OuterVolumeSpecName: "kube-api-access-cqvcz") pod "645d3722-79e7-4b78-a24d-2f5eca6c2714" (UID: "645d3722-79e7-4b78-a24d-2f5eca6c2714"). InnerVolumeSpecName "kube-api-access-cqvcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.041004 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "645d3722-79e7-4b78-a24d-2f5eca6c2714" (UID: "645d3722-79e7-4b78-a24d-2f5eca6c2714"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.046143 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-inventory" (OuterVolumeSpecName: "inventory") pod "645d3722-79e7-4b78-a24d-2f5eca6c2714" (UID: "645d3722-79e7-4b78-a24d-2f5eca6c2714"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.113344 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.113376 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645d3722-79e7-4b78-a24d-2f5eca6c2714-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.113385 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqvcz\" (UniqueName: \"kubernetes.io/projected/645d3722-79e7-4b78-a24d-2f5eca6c2714-kube-api-access-cqvcz\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.519654 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" 
event={"ID":"645d3722-79e7-4b78-a24d-2f5eca6c2714","Type":"ContainerDied","Data":"1c57b153b6380278224723c713679cd5d8ba06ea370a3ca6688d6ec583ada080"} Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.519713 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c57b153b6380278224723c713679cd5d8ba06ea370a3ca6688d6ec583ada080" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.519746 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zc8c4" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.625437 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52"] Feb 18 19:59:15 crc kubenswrapper[4932]: E0218 19:59:15.627250 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645d3722-79e7-4b78-a24d-2f5eca6c2714" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.627407 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="645d3722-79e7-4b78-a24d-2f5eca6c2714" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.628153 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="645d3722-79e7-4b78-a24d-2f5eca6c2714" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.629652 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.676758 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.677157 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.677281 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.677293 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.695925 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52"] Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.829505 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.829621 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.830011 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.830481 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkgqk\" (UniqueName: \"kubernetes.io/projected/dbe60214-3673-4c3b-a043-ee483870fe48-kube-api-access-pkgqk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.932766 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.932966 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkgqk\" (UniqueName: \"kubernetes.io/projected/dbe60214-3673-4c3b-a043-ee483870fe48-kube-api-access-pkgqk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.933067 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.933167 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.942035 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.944296 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.954581 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:15 crc kubenswrapper[4932]: I0218 19:59:15.961906 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkgqk\" (UniqueName: \"kubernetes.io/projected/dbe60214-3673-4c3b-a043-ee483870fe48-kube-api-access-pkgqk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mch52\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:16 crc kubenswrapper[4932]: I0218 19:59:16.010964 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 19:59:16 crc kubenswrapper[4932]: I0218 19:59:16.525711 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52"] Feb 18 19:59:16 crc kubenswrapper[4932]: W0218 19:59:16.531101 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbe60214_3673_4c3b_a043_ee483870fe48.slice/crio-388d5a8d9075b522b3396514316338221be10e04a6b2d65c99ef9f1e91e5c2b3 WatchSource:0}: Error finding container 388d5a8d9075b522b3396514316338221be10e04a6b2d65c99ef9f1e91e5c2b3: Status 404 returned error can't find the container with id 388d5a8d9075b522b3396514316338221be10e04a6b2d65c99ef9f1e91e5c2b3 Feb 18 19:59:17 crc kubenswrapper[4932]: I0218 19:59:17.548245 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" event={"ID":"dbe60214-3673-4c3b-a043-ee483870fe48","Type":"ContainerStarted","Data":"c3b8f5c0d6d86b3ea458a9947928d998d7a190d335a7fcd6011fecfca46d5ad1"} Feb 18 19:59:17 crc kubenswrapper[4932]: I0218 19:59:17.548629 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" 
event={"ID":"dbe60214-3673-4c3b-a043-ee483870fe48","Type":"ContainerStarted","Data":"388d5a8d9075b522b3396514316338221be10e04a6b2d65c99ef9f1e91e5c2b3"} Feb 18 19:59:17 crc kubenswrapper[4932]: I0218 19:59:17.585510 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" podStartSLOduration=2.198518862 podStartE2EDuration="2.585489558s" podCreationTimestamp="2026-02-18 19:59:15 +0000 UTC" firstStartedPulling="2026-02-18 19:59:16.536797865 +0000 UTC m=+1520.118752710" lastFinishedPulling="2026-02-18 19:59:16.923768561 +0000 UTC m=+1520.505723406" observedRunningTime="2026-02-18 19:59:17.568621428 +0000 UTC m=+1521.150576293" watchObservedRunningTime="2026-02-18 19:59:17.585489558 +0000 UTC m=+1521.167444403" Feb 18 19:59:27 crc kubenswrapper[4932]: I0218 19:59:27.606660 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:59:27 crc kubenswrapper[4932]: I0218 19:59:27.607296 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.015638 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2ml42"] Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.018711 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.028749 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ml42"] Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.084877 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-utilities\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.085258 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8rf5\" (UniqueName: \"kubernetes.io/projected/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-kube-api-access-h8rf5\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.085422 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-catalog-content\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.190209 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-utilities\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.190448 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-h8rf5\" (UniqueName: \"kubernetes.io/projected/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-kube-api-access-h8rf5\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.190594 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-catalog-content\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.191354 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-utilities\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.192015 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-catalog-content\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.255530 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8rf5\" (UniqueName: \"kubernetes.io/projected/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-kube-api-access-h8rf5\") pod \"redhat-marketplace-2ml42\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.349481 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.828127 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ml42"] Feb 18 19:59:52 crc kubenswrapper[4932]: I0218 19:59:52.931825 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerStarted","Data":"645ec31d43647f59f81cb86a1b8ef96d7e32c5b0c176847cf45357ed914898d2"} Feb 18 19:59:53 crc kubenswrapper[4932]: I0218 19:59:53.946851 4932 generic.go:334] "Generic (PLEG): container finished" podID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerID="f882eadc7a59782637627308960cb1ee779a8f928bf56cc5840ac520f5d219a4" exitCode=0 Feb 18 19:59:53 crc kubenswrapper[4932]: I0218 19:59:53.946931 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerDied","Data":"f882eadc7a59782637627308960cb1ee779a8f928bf56cc5840ac520f5d219a4"} Feb 18 19:59:54 crc kubenswrapper[4932]: I0218 19:59:54.958152 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerStarted","Data":"ac65f965b73ef12472cc016e48da1ec1c719ea417bbeafb7d9d02caeb08345dc"} Feb 18 19:59:55 crc kubenswrapper[4932]: I0218 19:59:55.967563 4932 generic.go:334] "Generic (PLEG): container finished" podID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerID="ac65f965b73ef12472cc016e48da1ec1c719ea417bbeafb7d9d02caeb08345dc" exitCode=0 Feb 18 19:59:55 crc kubenswrapper[4932]: I0218 19:59:55.967633 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" 
event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerDied","Data":"ac65f965b73ef12472cc016e48da1ec1c719ea417bbeafb7d9d02caeb08345dc"} Feb 18 19:59:56 crc kubenswrapper[4932]: I0218 19:59:56.981746 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerStarted","Data":"452de4dc305a80acacd17406ff86c4a1d00ce1e1c28fb18b02225b9eef68284c"} Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.013473 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2ml42" podStartSLOduration=3.3086679500000002 podStartE2EDuration="6.013437882s" podCreationTimestamp="2026-02-18 19:59:51 +0000 UTC" firstStartedPulling="2026-02-18 19:59:53.949270572 +0000 UTC m=+1557.531225417" lastFinishedPulling="2026-02-18 19:59:56.654040504 +0000 UTC m=+1560.235995349" observedRunningTime="2026-02-18 19:59:57.003249509 +0000 UTC m=+1560.585204414" watchObservedRunningTime="2026-02-18 19:59:57.013437882 +0000 UTC m=+1560.595392727" Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.606147 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.606472 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.606515 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.607139 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.607215 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" gracePeriod=600 Feb 18 19:59:57 crc kubenswrapper[4932]: E0218 19:59:57.746157 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.992774 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" exitCode=0 Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.992842 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d"} Feb 18 19:59:57 crc 
kubenswrapper[4932]: I0218 19:59:57.992890 4932 scope.go:117] "RemoveContainer" containerID="691ac26b2e0eb4976dab73dc438ad2163dc0ad731157e8dbe0e2c19541cba856" Feb 18 19:59:57 crc kubenswrapper[4932]: I0218 19:59:57.993583 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 19:59:57 crc kubenswrapper[4932]: E0218 19:59:57.993832 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.150653 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf"] Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.152676 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.157105 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.157250 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.190157 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf"] Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.251852 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9637eec3-3d3f-435b-9a57-ef318aa5300c-config-volume\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.252383 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9637eec3-3d3f-435b-9a57-ef318aa5300c-secret-volume\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.252495 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9nrh\" (UniqueName: \"kubernetes.io/projected/9637eec3-3d3f-435b-9a57-ef318aa5300c-kube-api-access-b9nrh\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.354827 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9637eec3-3d3f-435b-9a57-ef318aa5300c-config-volume\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.354916 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9637eec3-3d3f-435b-9a57-ef318aa5300c-secret-volume\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.354963 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9nrh\" (UniqueName: \"kubernetes.io/projected/9637eec3-3d3f-435b-9a57-ef318aa5300c-kube-api-access-b9nrh\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.357039 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9637eec3-3d3f-435b-9a57-ef318aa5300c-config-volume\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.365592 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9637eec3-3d3f-435b-9a57-ef318aa5300c-secret-volume\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.375490 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9nrh\" (UniqueName: \"kubernetes.io/projected/9637eec3-3d3f-435b-9a57-ef318aa5300c-kube-api-access-b9nrh\") pod \"collect-profiles-29524080-w6qbf\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.498666 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:00 crc kubenswrapper[4932]: I0218 20:00:00.962974 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf"] Feb 18 20:00:00 crc kubenswrapper[4932]: W0218 20:00:00.973073 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9637eec3_3d3f_435b_9a57_ef318aa5300c.slice/crio-4d7ad3acbe1c3361d0180ae0bbfa97700a5e076cc7baa5d48bab5f20885679e4 WatchSource:0}: Error finding container 4d7ad3acbe1c3361d0180ae0bbfa97700a5e076cc7baa5d48bab5f20885679e4: Status 404 returned error can't find the container with id 4d7ad3acbe1c3361d0180ae0bbfa97700a5e076cc7baa5d48bab5f20885679e4 Feb 18 20:00:01 crc kubenswrapper[4932]: I0218 20:00:01.026999 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" event={"ID":"9637eec3-3d3f-435b-9a57-ef318aa5300c","Type":"ContainerStarted","Data":"4d7ad3acbe1c3361d0180ae0bbfa97700a5e076cc7baa5d48bab5f20885679e4"} Feb 18 20:00:02 crc 
kubenswrapper[4932]: I0218 20:00:02.038962 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" event={"ID":"9637eec3-3d3f-435b-9a57-ef318aa5300c","Type":"ContainerStarted","Data":"a9f13f16fae2f188590028710fb520ed99f739785e726a38525e8fd3c5b3e49f"} Feb 18 20:00:02 crc kubenswrapper[4932]: I0218 20:00:02.069295 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" podStartSLOduration=2.069266996 podStartE2EDuration="2.069266996s" podCreationTimestamp="2026-02-18 20:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:00:02.056301134 +0000 UTC m=+1565.638256019" watchObservedRunningTime="2026-02-18 20:00:02.069266996 +0000 UTC m=+1565.651221841" Feb 18 20:00:02 crc kubenswrapper[4932]: I0218 20:00:02.351581 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 20:00:02 crc kubenswrapper[4932]: I0218 20:00:02.351928 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 20:00:02 crc kubenswrapper[4932]: I0218 20:00:02.424213 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 20:00:03 crc kubenswrapper[4932]: I0218 20:00:03.054514 4932 generic.go:334] "Generic (PLEG): container finished" podID="9637eec3-3d3f-435b-9a57-ef318aa5300c" containerID="a9f13f16fae2f188590028710fb520ed99f739785e726a38525e8fd3c5b3e49f" exitCode=0 Feb 18 20:00:03 crc kubenswrapper[4932]: I0218 20:00:03.055812 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" 
event={"ID":"9637eec3-3d3f-435b-9a57-ef318aa5300c","Type":"ContainerDied","Data":"a9f13f16fae2f188590028710fb520ed99f739785e726a38525e8fd3c5b3e49f"} Feb 18 20:00:03 crc kubenswrapper[4932]: I0218 20:00:03.150114 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 20:00:03 crc kubenswrapper[4932]: I0218 20:00:03.152113 4932 scope.go:117] "RemoveContainer" containerID="502e6556feede81a431352bd255101dc0919dfeb0d3696054c3aff0523a4cd61" Feb 18 20:00:03 crc kubenswrapper[4932]: I0218 20:00:03.229918 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ml42"] Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.458159 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.549849 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9nrh\" (UniqueName: \"kubernetes.io/projected/9637eec3-3d3f-435b-9a57-ef318aa5300c-kube-api-access-b9nrh\") pod \"9637eec3-3d3f-435b-9a57-ef318aa5300c\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.549940 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9637eec3-3d3f-435b-9a57-ef318aa5300c-config-volume\") pod \"9637eec3-3d3f-435b-9a57-ef318aa5300c\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.549971 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9637eec3-3d3f-435b-9a57-ef318aa5300c-secret-volume\") pod \"9637eec3-3d3f-435b-9a57-ef318aa5300c\" (UID: \"9637eec3-3d3f-435b-9a57-ef318aa5300c\") " Feb 18 20:00:04 crc 
kubenswrapper[4932]: I0218 20:00:04.551396 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9637eec3-3d3f-435b-9a57-ef318aa5300c-config-volume" (OuterVolumeSpecName: "config-volume") pod "9637eec3-3d3f-435b-9a57-ef318aa5300c" (UID: "9637eec3-3d3f-435b-9a57-ef318aa5300c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.556503 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9637eec3-3d3f-435b-9a57-ef318aa5300c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9637eec3-3d3f-435b-9a57-ef318aa5300c" (UID: "9637eec3-3d3f-435b-9a57-ef318aa5300c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.570787 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9637eec3-3d3f-435b-9a57-ef318aa5300c-kube-api-access-b9nrh" (OuterVolumeSpecName: "kube-api-access-b9nrh") pod "9637eec3-3d3f-435b-9a57-ef318aa5300c" (UID: "9637eec3-3d3f-435b-9a57-ef318aa5300c"). InnerVolumeSpecName "kube-api-access-b9nrh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.652763 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9nrh\" (UniqueName: \"kubernetes.io/projected/9637eec3-3d3f-435b-9a57-ef318aa5300c-kube-api-access-b9nrh\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.652804 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9637eec3-3d3f-435b-9a57-ef318aa5300c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:04 crc kubenswrapper[4932]: I0218 20:00:04.652831 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9637eec3-3d3f-435b-9a57-ef318aa5300c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:05 crc kubenswrapper[4932]: I0218 20:00:05.126463 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" Feb 18 20:00:05 crc kubenswrapper[4932]: I0218 20:00:05.126802 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf" event={"ID":"9637eec3-3d3f-435b-9a57-ef318aa5300c","Type":"ContainerDied","Data":"4d7ad3acbe1c3361d0180ae0bbfa97700a5e076cc7baa5d48bab5f20885679e4"} Feb 18 20:00:05 crc kubenswrapper[4932]: I0218 20:00:05.126828 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d7ad3acbe1c3361d0180ae0bbfa97700a5e076cc7baa5d48bab5f20885679e4" Feb 18 20:00:05 crc kubenswrapper[4932]: I0218 20:00:05.126569 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2ml42" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="registry-server" 
containerID="cri-o://452de4dc305a80acacd17406ff86c4a1d00ce1e1c28fb18b02225b9eef68284c" gracePeriod=2 Feb 18 20:00:05 crc kubenswrapper[4932]: E0218 20:00:05.289459 4932 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9637eec3_3d3f_435b_9a57_ef318aa5300c.slice\": RecentStats: unable to find data in memory cache]" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.142510 4932 generic.go:334] "Generic (PLEG): container finished" podID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerID="452de4dc305a80acacd17406ff86c4a1d00ce1e1c28fb18b02225b9eef68284c" exitCode=0 Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.142822 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerDied","Data":"452de4dc305a80acacd17406ff86c4a1d00ce1e1c28fb18b02225b9eef68284c"} Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.466676 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.599516 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-catalog-content\") pod \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.599611 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8rf5\" (UniqueName: \"kubernetes.io/projected/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-kube-api-access-h8rf5\") pod \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.599763 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-utilities\") pod \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\" (UID: \"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d\") " Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.600824 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-utilities" (OuterVolumeSpecName: "utilities") pod "8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" (UID: "8ecd50a3-15c9-4b1a-8c77-b1fb4303596d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.619079 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-kube-api-access-h8rf5" (OuterVolumeSpecName: "kube-api-access-h8rf5") pod "8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" (UID: "8ecd50a3-15c9-4b1a-8c77-b1fb4303596d"). InnerVolumeSpecName "kube-api-access-h8rf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.623617 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" (UID: "8ecd50a3-15c9-4b1a-8c77-b1fb4303596d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.701954 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.701988 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:06 crc kubenswrapper[4932]: I0218 20:00:06.702011 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8rf5\" (UniqueName: \"kubernetes.io/projected/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d-kube-api-access-h8rf5\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.157988 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ml42" event={"ID":"8ecd50a3-15c9-4b1a-8c77-b1fb4303596d","Type":"ContainerDied","Data":"645ec31d43647f59f81cb86a1b8ef96d7e32c5b0c176847cf45357ed914898d2"} Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.158414 4932 scope.go:117] "RemoveContainer" containerID="452de4dc305a80acacd17406ff86c4a1d00ce1e1c28fb18b02225b9eef68284c" Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.158052 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ml42" Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.189207 4932 scope.go:117] "RemoveContainer" containerID="ac65f965b73ef12472cc016e48da1ec1c719ea417bbeafb7d9d02caeb08345dc" Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.210861 4932 scope.go:117] "RemoveContainer" containerID="f882eadc7a59782637627308960cb1ee779a8f928bf56cc5840ac520f5d219a4" Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.222123 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ml42"] Feb 18 20:00:07 crc kubenswrapper[4932]: I0218 20:00:07.230939 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ml42"] Feb 18 20:00:09 crc kubenswrapper[4932]: I0218 20:00:09.205513 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" path="/var/lib/kubelet/pods/8ecd50a3-15c9-4b1a-8c77-b1fb4303596d/volumes" Feb 18 20:00:10 crc kubenswrapper[4932]: I0218 20:00:10.179724 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:00:10 crc kubenswrapper[4932]: E0218 20:00:10.180274 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:00:24 crc kubenswrapper[4932]: I0218 20:00:24.180699 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:00:24 crc kubenswrapper[4932]: E0218 20:00:24.181823 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:00:35 crc kubenswrapper[4932]: I0218 20:00:35.180352 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:00:35 crc kubenswrapper[4932]: E0218 20:00:35.181348 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:00:49 crc kubenswrapper[4932]: I0218 20:00:49.179507 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:00:49 crc kubenswrapper[4932]: E0218 20:00:49.180758 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.162965 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524081-jsnmw"] Feb 18 20:01:00 crc kubenswrapper[4932]: E0218 20:01:00.164013 4932 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="extract-utilities" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.164030 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="extract-utilities" Feb 18 20:01:00 crc kubenswrapper[4932]: E0218 20:01:00.164048 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9637eec3-3d3f-435b-9a57-ef318aa5300c" containerName="collect-profiles" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.164056 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9637eec3-3d3f-435b-9a57-ef318aa5300c" containerName="collect-profiles" Feb 18 20:01:00 crc kubenswrapper[4932]: E0218 20:01:00.164076 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="extract-content" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.164084 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="extract-content" Feb 18 20:01:00 crc kubenswrapper[4932]: E0218 20:01:00.164100 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="registry-server" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.164108 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="registry-server" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.164376 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ecd50a3-15c9-4b1a-8c77-b1fb4303596d" containerName="registry-server" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.164410 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9637eec3-3d3f-435b-9a57-ef318aa5300c" containerName="collect-profiles" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.165236 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.175438 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524081-jsnmw"] Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.249671 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-fernet-keys\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.250345 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-combined-ca-bundle\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.250419 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qq8g\" (UniqueName: \"kubernetes.io/projected/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-kube-api-access-7qq8g\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.250474 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-config-data\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.352628 4932 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-fernet-keys\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.352791 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-combined-ca-bundle\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.352835 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qq8g\" (UniqueName: \"kubernetes.io/projected/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-kube-api-access-7qq8g\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.352889 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-config-data\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.363563 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-fernet-keys\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.365800 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-config-data\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.366514 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-combined-ca-bundle\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.376456 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qq8g\" (UniqueName: \"kubernetes.io/projected/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-kube-api-access-7qq8g\") pod \"keystone-cron-29524081-jsnmw\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.493154 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:00 crc kubenswrapper[4932]: I0218 20:01:00.959751 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524081-jsnmw"] Feb 18 20:01:01 crc kubenswrapper[4932]: I0218 20:01:01.818036 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-jsnmw" event={"ID":"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a","Type":"ContainerStarted","Data":"4b4e199b09552b73ff8acfbf5edd85042cafa10c4171859d3dfd7a31f2670d3d"} Feb 18 20:01:01 crc kubenswrapper[4932]: I0218 20:01:01.818096 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-jsnmw" event={"ID":"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a","Type":"ContainerStarted","Data":"59f3482ad312ba2973909cb9359693447542d72ec11c1a10a1a59349c104baa5"} Feb 18 20:01:01 crc kubenswrapper[4932]: I0218 20:01:01.835669 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524081-jsnmw" podStartSLOduration=1.835652573 podStartE2EDuration="1.835652573s" podCreationTimestamp="2026-02-18 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:01:01.832671829 +0000 UTC m=+1625.414626714" watchObservedRunningTime="2026-02-18 20:01:01.835652573 +0000 UTC m=+1625.417607428" Feb 18 20:01:02 crc kubenswrapper[4932]: I0218 20:01:02.179239 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:01:02 crc kubenswrapper[4932]: E0218 20:01:02.179482 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:01:03 crc kubenswrapper[4932]: I0218 20:01:03.271633 4932 scope.go:117] "RemoveContainer" containerID="2dd4d65476d1505ac595577a77e37ccd6902dc5b61d39daf8b0813fba6426e5c" Feb 18 20:01:03 crc kubenswrapper[4932]: I0218 20:01:03.305634 4932 scope.go:117] "RemoveContainer" containerID="b1963dc8bdedaa6e9c39260e4aa454ec9b1f122ff3e931be78b28e85782c2717" Feb 18 20:01:04 crc kubenswrapper[4932]: I0218 20:01:04.848886 4932 generic.go:334] "Generic (PLEG): container finished" podID="0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" containerID="4b4e199b09552b73ff8acfbf5edd85042cafa10c4171859d3dfd7a31f2670d3d" exitCode=0 Feb 18 20:01:04 crc kubenswrapper[4932]: I0218 20:01:04.848974 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-jsnmw" event={"ID":"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a","Type":"ContainerDied","Data":"4b4e199b09552b73ff8acfbf5edd85042cafa10c4171859d3dfd7a31f2670d3d"} Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.276166 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.385906 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-config-data\") pod \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.386371 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-combined-ca-bundle\") pod \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.386936 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-fernet-keys\") pod \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.386982 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qq8g\" (UniqueName: \"kubernetes.io/projected/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-kube-api-access-7qq8g\") pod \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\" (UID: \"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a\") " Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.393137 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-kube-api-access-7qq8g" (OuterVolumeSpecName: "kube-api-access-7qq8g") pod "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" (UID: "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a"). InnerVolumeSpecName "kube-api-access-7qq8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.393489 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" (UID: "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.414905 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" (UID: "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.441808 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-config-data" (OuterVolumeSpecName: "config-data") pod "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" (UID: "0bba22f9-3b80-430c-9ef5-d8ca59db0d8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.490030 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qq8g\" (UniqueName: \"kubernetes.io/projected/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-kube-api-access-7qq8g\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.490073 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.490087 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.490098 4932 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bba22f9-3b80-430c-9ef5-d8ca59db0d8a-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.874720 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-jsnmw" event={"ID":"0bba22f9-3b80-430c-9ef5-d8ca59db0d8a","Type":"ContainerDied","Data":"59f3482ad312ba2973909cb9359693447542d72ec11c1a10a1a59349c104baa5"} Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.874766 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59f3482ad312ba2973909cb9359693447542d72ec11c1a10a1a59349c104baa5" Feb 18 20:01:06 crc kubenswrapper[4932]: I0218 20:01:06.874771 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-jsnmw" Feb 18 20:01:17 crc kubenswrapper[4932]: I0218 20:01:17.193819 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:01:17 crc kubenswrapper[4932]: E0218 20:01:17.194903 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:01:28 crc kubenswrapper[4932]: I0218 20:01:28.178728 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:01:28 crc kubenswrapper[4932]: E0218 20:01:28.179660 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:01:40 crc kubenswrapper[4932]: I0218 20:01:40.179044 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:01:40 crc kubenswrapper[4932]: E0218 20:01:40.179927 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:01:52 crc kubenswrapper[4932]: I0218 20:01:52.179637 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:01:52 crc kubenswrapper[4932]: E0218 20:01:52.180933 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:02:03 crc kubenswrapper[4932]: I0218 20:02:03.416843 4932 scope.go:117] "RemoveContainer" containerID="f581b8c9ce44e42d3ff03f376a0f68bc8c6d3dd65d58f6d7b80411f3452dd5a6" Feb 18 20:02:05 crc kubenswrapper[4932]: I0218 20:02:05.180981 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:02:05 crc kubenswrapper[4932]: E0218 20:02:05.181611 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:02:17 crc kubenswrapper[4932]: I0218 20:02:17.187718 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:02:17 crc kubenswrapper[4932]: E0218 20:02:17.188639 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:02:20 crc kubenswrapper[4932]: I0218 20:02:20.007492 4932 generic.go:334] "Generic (PLEG): container finished" podID="dbe60214-3673-4c3b-a043-ee483870fe48" containerID="c3b8f5c0d6d86b3ea458a9947928d998d7a190d335a7fcd6011fecfca46d5ad1" exitCode=0 Feb 18 20:02:20 crc kubenswrapper[4932]: I0218 20:02:20.007571 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" event={"ID":"dbe60214-3673-4c3b-a043-ee483870fe48","Type":"ContainerDied","Data":"c3b8f5c0d6d86b3ea458a9947928d998d7a190d335a7fcd6011fecfca46d5ad1"} Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.556733 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.664306 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-ssh-key-openstack-edpm-ipam\") pod \"dbe60214-3673-4c3b-a043-ee483870fe48\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.664355 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-inventory\") pod \"dbe60214-3673-4c3b-a043-ee483870fe48\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.664500 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkgqk\" (UniqueName: \"kubernetes.io/projected/dbe60214-3673-4c3b-a043-ee483870fe48-kube-api-access-pkgqk\") pod \"dbe60214-3673-4c3b-a043-ee483870fe48\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.664545 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-bootstrap-combined-ca-bundle\") pod \"dbe60214-3673-4c3b-a043-ee483870fe48\" (UID: \"dbe60214-3673-4c3b-a043-ee483870fe48\") " Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.669935 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe60214-3673-4c3b-a043-ee483870fe48-kube-api-access-pkgqk" (OuterVolumeSpecName: "kube-api-access-pkgqk") pod "dbe60214-3673-4c3b-a043-ee483870fe48" (UID: "dbe60214-3673-4c3b-a043-ee483870fe48"). InnerVolumeSpecName "kube-api-access-pkgqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.670480 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "dbe60214-3673-4c3b-a043-ee483870fe48" (UID: "dbe60214-3673-4c3b-a043-ee483870fe48"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.696469 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dbe60214-3673-4c3b-a043-ee483870fe48" (UID: "dbe60214-3673-4c3b-a043-ee483870fe48"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.708221 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-inventory" (OuterVolumeSpecName: "inventory") pod "dbe60214-3673-4c3b-a043-ee483870fe48" (UID: "dbe60214-3673-4c3b-a043-ee483870fe48"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.766685 4932 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.766723 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.766737 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe60214-3673-4c3b-a043-ee483870fe48-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:21 crc kubenswrapper[4932]: I0218 20:02:21.766748 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkgqk\" (UniqueName: \"kubernetes.io/projected/dbe60214-3673-4c3b-a043-ee483870fe48-kube-api-access-pkgqk\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.032231 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" event={"ID":"dbe60214-3673-4c3b-a043-ee483870fe48","Type":"ContainerDied","Data":"388d5a8d9075b522b3396514316338221be10e04a6b2d65c99ef9f1e91e5c2b3"} Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.032553 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388d5a8d9075b522b3396514316338221be10e04a6b2d65c99ef9f1e91e5c2b3" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.032451 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mch52" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.131052 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf"] Feb 18 20:02:22 crc kubenswrapper[4932]: E0218 20:02:22.133391 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe60214-3673-4c3b-a043-ee483870fe48" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.133422 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe60214-3673-4c3b-a043-ee483870fe48" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 20:02:22 crc kubenswrapper[4932]: E0218 20:02:22.133443 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" containerName="keystone-cron" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.133452 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" containerName="keystone-cron" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.133827 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe60214-3673-4c3b-a043-ee483870fe48" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.133845 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bba22f9-3b80-430c-9ef5-d8ca59db0d8a" containerName="keystone-cron" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.134706 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.138636 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.138955 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.138981 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.139040 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.145207 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf"] Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.275497 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.275587 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h7g2\" (UniqueName: \"kubernetes.io/projected/e460efcc-55a7-4c68-9c14-91009dee948b-kube-api-access-5h7g2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc 
kubenswrapper[4932]: I0218 20:02:22.275724 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.377397 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h7g2\" (UniqueName: \"kubernetes.io/projected/e460efcc-55a7-4c68-9c14-91009dee948b-kube-api-access-5h7g2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.377541 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.377661 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.383815 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.384427 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.399919 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h7g2\" (UniqueName: \"kubernetes.io/projected/e460efcc-55a7-4c68-9c14-91009dee948b-kube-api-access-5h7g2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:22 crc kubenswrapper[4932]: I0218 20:02:22.449810 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:02:23 crc kubenswrapper[4932]: I0218 20:02:23.005034 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf"] Feb 18 20:02:23 crc kubenswrapper[4932]: I0218 20:02:23.011152 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:02:23 crc kubenswrapper[4932]: I0218 20:02:23.050487 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" event={"ID":"e460efcc-55a7-4c68-9c14-91009dee948b","Type":"ContainerStarted","Data":"56c1cc5eef2cc64c63813c104a79ed3516a686af7cfcd28574f92635e466d803"} Feb 18 20:02:24 crc kubenswrapper[4932]: I0218 20:02:24.061461 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" event={"ID":"e460efcc-55a7-4c68-9c14-91009dee948b","Type":"ContainerStarted","Data":"30c66c47bf8249b3b27644ba8768bf427241f9daf4bcbee4fae5c4b1e9538966"} Feb 18 20:02:24 crc kubenswrapper[4932]: I0218 20:02:24.078058 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" podStartSLOduration=1.521745477 podStartE2EDuration="2.078038952s" podCreationTimestamp="2026-02-18 20:02:22 +0000 UTC" firstStartedPulling="2026-02-18 20:02:23.010934344 +0000 UTC m=+1706.592889189" lastFinishedPulling="2026-02-18 20:02:23.567227799 +0000 UTC m=+1707.149182664" observedRunningTime="2026-02-18 20:02:24.076031012 +0000 UTC m=+1707.657985857" watchObservedRunningTime="2026-02-18 20:02:24.078038952 +0000 UTC m=+1707.659993797" Feb 18 20:02:31 crc kubenswrapper[4932]: I0218 20:02:31.179943 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:02:31 crc 
kubenswrapper[4932]: E0218 20:02:31.180796 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:02:39 crc kubenswrapper[4932]: I0218 20:02:39.049950 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zhvln"] Feb 18 20:02:39 crc kubenswrapper[4932]: I0218 20:02:39.060501 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zhvln"] Feb 18 20:02:39 crc kubenswrapper[4932]: I0218 20:02:39.190234 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64352a4d-f3af-44e1-b1d7-cc5e125de560" path="/var/lib/kubelet/pods/64352a4d-f3af-44e1-b1d7-cc5e125de560/volumes" Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.047193 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-js74w"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.065557 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bd21-account-create-update-kcn9v"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.081233 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rw8qr"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.090913 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e952-account-create-update-jjrs6"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.105301 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bd21-account-create-update-kcn9v"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.128272 4932 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-5833-account-create-update-fxm2t"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.146424 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e952-account-create-update-jjrs6"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.165242 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-js74w"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.176855 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-5833-account-create-update-fxm2t"] Feb 18 20:02:40 crc kubenswrapper[4932]: I0218 20:02:40.186187 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rw8qr"] Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.040352 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-vtbzd"] Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.056397 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-734d-account-create-update-stk6x"] Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.070507 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-vtbzd"] Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.082827 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-734d-account-create-update-stk6x"] Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.191594 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02bb1c31-7377-432f-8434-72981200f1ac" path="/var/lib/kubelet/pods/02bb1c31-7377-432f-8434-72981200f1ac/volumes" Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.192685 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26bd1cb1-1dcb-460e-ba19-eb8bef1951b5" path="/var/lib/kubelet/pods/26bd1cb1-1dcb-460e-ba19-eb8bef1951b5/volumes" Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.193719 4932 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35590261-332c-47e0-89e9-4eef3fd36086" path="/var/lib/kubelet/pods/35590261-332c-47e0-89e9-4eef3fd36086/volumes" Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.194833 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56349fdd-8b87-4910-b182-555b5913d5ee" path="/var/lib/kubelet/pods/56349fdd-8b87-4910-b182-555b5913d5ee/volumes" Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.196320 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa1fef8-5a2e-4518-8641-d4b594fc29a3" path="/var/lib/kubelet/pods/7fa1fef8-5a2e-4518-8641-d4b594fc29a3/volumes" Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.197149 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec590bc-e2ef-49e0-80be-27af6f69aa06" path="/var/lib/kubelet/pods/bec590bc-e2ef-49e0-80be-27af6f69aa06/volumes" Feb 18 20:02:41 crc kubenswrapper[4932]: I0218 20:02:41.198097 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c8a6a6-4944-4c6f-be98-9dde833b89e5" path="/var/lib/kubelet/pods/c4c8a6a6-4944-4c6f-be98-9dde833b89e5/volumes" Feb 18 20:02:43 crc kubenswrapper[4932]: I0218 20:02:43.179683 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:02:43 crc kubenswrapper[4932]: E0218 20:02:43.180897 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:02:54 crc kubenswrapper[4932]: I0218 20:02:54.179811 4932 scope.go:117] "RemoveContainer" 
containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:02:54 crc kubenswrapper[4932]: E0218 20:02:54.182365 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:03:01 crc kubenswrapper[4932]: I0218 20:03:01.042769 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xbdgt"] Feb 18 20:03:01 crc kubenswrapper[4932]: I0218 20:03:01.056188 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xbdgt"] Feb 18 20:03:01 crc kubenswrapper[4932]: I0218 20:03:01.196975 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb4a050-ebc6-4319-b27f-9c9cce058ec1" path="/var/lib/kubelet/pods/3eb4a050-ebc6-4319-b27f-9c9cce058ec1/volumes" Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.495297 4932 scope.go:117] "RemoveContainer" containerID="3a33312c61bc35aede7b854947f5cacef494c07faca9fd46ae2f217a195bc457" Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.527936 4932 scope.go:117] "RemoveContainer" containerID="3ce1a237abcba8eb5dacdaaf6767d6692224b8089fbea09e0b1408de503e1b1a" Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.586885 4932 scope.go:117] "RemoveContainer" containerID="95e00440e590eb387c9cf8e2e2f9778a04bbe9e0e014879d57139cdcea3fd2d4" Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.632139 4932 scope.go:117] "RemoveContainer" containerID="5e6b5516d234b57d2f859d33d51d54c0aee524d02399dad696a4642cf7cceb8a" Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.672646 4932 scope.go:117] "RemoveContainer" 
containerID="eeb81a13449459a4c7d2237c075a2110a61a815c3e8cc4a439843e5121373f28" Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.718653 4932 scope.go:117] "RemoveContainer" containerID="38fb496c61ec368b9f0d3847ea90156e96e96daa825692bcb6b0867b238ef4ee" Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.761726 4932 scope.go:117] "RemoveContainer" containerID="2cfcad461c33bcb694d12209c0cb7b72420cbc06fd09263f1f26b50ea451f974" Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.782381 4932 scope.go:117] "RemoveContainer" containerID="3b567de8b4f1ae33989815fad19a6d8b9f69d7df099f4fd8ff235740848c1cc0" Feb 18 20:03:03 crc kubenswrapper[4932]: I0218 20:03:03.806488 4932 scope.go:117] "RemoveContainer" containerID="439c6cd70d2e38e21f55a810c1fb66ab1e1dc66541977f85b2ca4f91d6caf61b" Feb 18 20:03:06 crc kubenswrapper[4932]: I0218 20:03:06.038394 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-hvt6h"] Feb 18 20:03:06 crc kubenswrapper[4932]: I0218 20:03:06.047657 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-hvt6h"] Feb 18 20:03:06 crc kubenswrapper[4932]: I0218 20:03:06.179800 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:03:06 crc kubenswrapper[4932]: E0218 20:03:06.180361 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.036531 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-hn6qq"] Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 
20:03:07.050942 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a65d-account-create-update-chx2v"] Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.061947 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a65d-account-create-update-chx2v"] Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.070733 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-hn6qq"] Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.078699 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5bd9-account-create-update-7tv8h"] Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.085984 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5bd9-account-create-update-7tv8h"] Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.191275 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56734660-55cc-463c-89f2-131bc9109dab" path="/var/lib/kubelet/pods/56734660-55cc-463c-89f2-131bc9109dab/volumes" Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.191845 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7680bf6b-efd6-452a-8900-09cf55b203ff" path="/var/lib/kubelet/pods/7680bf6b-efd6-452a-8900-09cf55b203ff/volumes" Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.192442 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac9c39c2-bf9e-4f11-b37f-17089fce08e7" path="/var/lib/kubelet/pods/ac9c39c2-bf9e-4f11-b37f-17089fce08e7/volumes" Feb 18 20:03:07 crc kubenswrapper[4932]: I0218 20:03:07.193028 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7988cea-6aa8-4552-8965-04b417c91831" path="/var/lib/kubelet/pods/f7988cea-6aa8-4552-8965-04b417c91831/volumes" Feb 18 20:03:12 crc kubenswrapper[4932]: I0218 20:03:12.039907 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-hbs76"] Feb 18 20:03:12 crc 
kubenswrapper[4932]: I0218 20:03:12.052284 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-53f4-account-create-update-mh2bq"] Feb 18 20:03:12 crc kubenswrapper[4932]: I0218 20:03:12.061187 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-53f4-account-create-update-mh2bq"] Feb 18 20:03:12 crc kubenswrapper[4932]: I0218 20:03:12.073521 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-hbs76"] Feb 18 20:03:13 crc kubenswrapper[4932]: I0218 20:03:13.192647 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b9deee6-7804-492e-88c9-147087152416" path="/var/lib/kubelet/pods/0b9deee6-7804-492e-88c9-147087152416/volumes" Feb 18 20:03:13 crc kubenswrapper[4932]: I0218 20:03:13.195044 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca3578cc-7bd4-4e77-8b29-bbb38f588260" path="/var/lib/kubelet/pods/ca3578cc-7bd4-4e77-8b29-bbb38f588260/volumes" Feb 18 20:03:17 crc kubenswrapper[4932]: I0218 20:03:17.035643 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-rl7xx"] Feb 18 20:03:17 crc kubenswrapper[4932]: I0218 20:03:17.048457 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-rl7xx"] Feb 18 20:03:17 crc kubenswrapper[4932]: I0218 20:03:17.196697 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bbf2873-6ca9-4569-b5b6-3003511c02ba" path="/var/lib/kubelet/pods/1bbf2873-6ca9-4569-b5b6-3003511c02ba/volumes" Feb 18 20:03:21 crc kubenswrapper[4932]: I0218 20:03:21.179274 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:03:21 crc kubenswrapper[4932]: E0218 20:03:21.180412 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:03:24 crc kubenswrapper[4932]: I0218 20:03:24.072477 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-4ghxf"] Feb 18 20:03:24 crc kubenswrapper[4932]: I0218 20:03:24.094444 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-4ghxf"] Feb 18 20:03:25 crc kubenswrapper[4932]: I0218 20:03:25.051024 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-h526s"] Feb 18 20:03:25 crc kubenswrapper[4932]: I0218 20:03:25.062466 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-h526s"] Feb 18 20:03:25 crc kubenswrapper[4932]: I0218 20:03:25.204027 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c3aa11-529c-423d-bb7d-30fd0d5a3e7a" path="/var/lib/kubelet/pods/14c3aa11-529c-423d-bb7d-30fd0d5a3e7a/volumes" Feb 18 20:03:25 crc kubenswrapper[4932]: I0218 20:03:25.204668 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc05154b-7f25-4fb1-8293-9aba06523c37" path="/var/lib/kubelet/pods/bc05154b-7f25-4fb1-8293-9aba06523c37/volumes" Feb 18 20:03:34 crc kubenswrapper[4932]: I0218 20:03:34.179417 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:03:34 crc kubenswrapper[4932]: E0218 20:03:34.180410 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:03:48 crc kubenswrapper[4932]: I0218 20:03:48.179437 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:03:48 crc kubenswrapper[4932]: E0218 20:03:48.180486 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:04:01 crc kubenswrapper[4932]: I0218 20:04:01.179735 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:04:01 crc kubenswrapper[4932]: E0218 20:04:01.183231 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:04:02 crc kubenswrapper[4932]: I0218 20:04:02.312690 4932 generic.go:334] "Generic (PLEG): container finished" podID="e460efcc-55a7-4c68-9c14-91009dee948b" containerID="30c66c47bf8249b3b27644ba8768bf427241f9daf4bcbee4fae5c4b1e9538966" exitCode=0 Feb 18 20:04:02 crc kubenswrapper[4932]: I0218 20:04:02.312791 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" 
event={"ID":"e460efcc-55a7-4c68-9c14-91009dee948b","Type":"ContainerDied","Data":"30c66c47bf8249b3b27644ba8768bf427241f9daf4bcbee4fae5c4b1e9538966"} Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.741756 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.880078 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-ssh-key-openstack-edpm-ipam\") pod \"e460efcc-55a7-4c68-9c14-91009dee948b\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.880222 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h7g2\" (UniqueName: \"kubernetes.io/projected/e460efcc-55a7-4c68-9c14-91009dee948b-kube-api-access-5h7g2\") pod \"e460efcc-55a7-4c68-9c14-91009dee948b\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.880437 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-inventory\") pod \"e460efcc-55a7-4c68-9c14-91009dee948b\" (UID: \"e460efcc-55a7-4c68-9c14-91009dee948b\") " Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.886563 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e460efcc-55a7-4c68-9c14-91009dee948b-kube-api-access-5h7g2" (OuterVolumeSpecName: "kube-api-access-5h7g2") pod "e460efcc-55a7-4c68-9c14-91009dee948b" (UID: "e460efcc-55a7-4c68-9c14-91009dee948b"). InnerVolumeSpecName "kube-api-access-5h7g2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.931230 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e460efcc-55a7-4c68-9c14-91009dee948b" (UID: "e460efcc-55a7-4c68-9c14-91009dee948b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.932193 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-inventory" (OuterVolumeSpecName: "inventory") pod "e460efcc-55a7-4c68-9c14-91009dee948b" (UID: "e460efcc-55a7-4c68-9c14-91009dee948b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.978292 4932 scope.go:117] "RemoveContainer" containerID="2dcf1d051e29c868ab7c7db13dbafa7710ab23c52dd39329f8dbfbb2b5ea9459" Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.984294 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.984323 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h7g2\" (UniqueName: \"kubernetes.io/projected/e460efcc-55a7-4c68-9c14-91009dee948b-kube-api-access-5h7g2\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:03 crc kubenswrapper[4932]: I0218 20:04:03.984338 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e460efcc-55a7-4c68-9c14-91009dee948b-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:04 crc kubenswrapper[4932]: 
I0218 20:04:04.052427 4932 scope.go:117] "RemoveContainer" containerID="48cdc7bd0a5fa5affdc3d044cfe0ccc940cdde09dc40fd9f4253e5cd4c996f16" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.074747 4932 scope.go:117] "RemoveContainer" containerID="03cc21b056f77810add58b5621bb79299b2f95efe33228e5665e27461f3e50f3" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.098085 4932 scope.go:117] "RemoveContainer" containerID="4abb236d79cc8592182059a25a1bc35aaa2d4ae1b8716c7469a32147843e50a4" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.121195 4932 scope.go:117] "RemoveContainer" containerID="979fb0febd6062fe5161812c56f74561bb0c81dc6ed2e8e26cb348d3275186d6" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.145936 4932 scope.go:117] "RemoveContainer" containerID="104353923ef97f2e6933dbfcfbfc2a9125473f1373667e2eb5163afb4316da88" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.191712 4932 scope.go:117] "RemoveContainer" containerID="2a62cc7c92b0f61fc993f04377e1428679cd22afc955b4da72b0e6e2d00eb682" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.214109 4932 scope.go:117] "RemoveContainer" containerID="e723a55a533327bae796eda64399cc0b1ee1750e65068515a7e5625e2f091ec4" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.257986 4932 scope.go:117] "RemoveContainer" containerID="0d634b73a958b2e21485770f0ca87b0cc9a8038deca230cf324c0047e0c7f89e" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.342243 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.342425 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6pnbf" event={"ID":"e460efcc-55a7-4c68-9c14-91009dee948b","Type":"ContainerDied","Data":"56c1cc5eef2cc64c63813c104a79ed3516a686af7cfcd28574f92635e466d803"} Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.342482 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56c1cc5eef2cc64c63813c104a79ed3516a686af7cfcd28574f92635e466d803" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.418511 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"] Feb 18 20:04:04 crc kubenswrapper[4932]: E0218 20:04:04.419361 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e460efcc-55a7-4c68-9c14-91009dee948b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.419438 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="e460efcc-55a7-4c68-9c14-91009dee948b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.419707 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="e460efcc-55a7-4c68-9c14-91009dee948b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.420409 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.422861 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.423165 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.423248 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.432165 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.455599 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"] Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.493355 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5574d\" (UniqueName: \"kubernetes.io/projected/12f764db-8a47-4554-bea3-c71b6663cdec-kube-api-access-5574d\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.493408 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:04:04 crc kubenswrapper[4932]: 
I0218 20:04:04.493447 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.595876 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.595939 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.596086 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5574d\" (UniqueName: \"kubernetes.io/projected/12f764db-8a47-4554-bea3-c71b6663cdec-kube-api-access-5574d\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.600706 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.600751 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.622752 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5574d\" (UniqueName: \"kubernetes.io/projected/12f764db-8a47-4554-bea3-c71b6663cdec-kube-api-access-5574d\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:04:04 crc kubenswrapper[4932]: I0218 20:04:04.740753 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:04:05 crc kubenswrapper[4932]: I0218 20:04:05.764065 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx"] Feb 18 20:04:06 crc kubenswrapper[4932]: I0218 20:04:06.635813 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" event={"ID":"12f764db-8a47-4554-bea3-c71b6663cdec","Type":"ContainerStarted","Data":"7b6cbe66b4880567e8856647b80588deb9823126d1f35e4825ba3e7a73a88b7a"} Feb 18 20:04:06 crc kubenswrapper[4932]: I0218 20:04:06.635877 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" event={"ID":"12f764db-8a47-4554-bea3-c71b6663cdec","Type":"ContainerStarted","Data":"1d9a351b5e32ee2106112cc0bb1d8becea3a0f04cb2a76228d9bf8ba749d8d89"} Feb 18 20:04:06 crc kubenswrapper[4932]: I0218 20:04:06.658755 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" podStartSLOduration=2.152086411 podStartE2EDuration="2.65873351s" podCreationTimestamp="2026-02-18 20:04:04 +0000 UTC" firstStartedPulling="2026-02-18 20:04:05.772405034 +0000 UTC m=+1809.354359879" lastFinishedPulling="2026-02-18 20:04:06.279052123 +0000 UTC m=+1809.861006978" observedRunningTime="2026-02-18 20:04:06.650321351 +0000 UTC m=+1810.232276206" watchObservedRunningTime="2026-02-18 20:04:06.65873351 +0000 UTC m=+1810.240688365" Feb 18 20:04:07 crc kubenswrapper[4932]: I0218 20:04:07.048547 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vldrp"] Feb 18 20:04:07 crc kubenswrapper[4932]: I0218 20:04:07.057264 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vldrp"] Feb 18 20:04:07 crc 
kubenswrapper[4932]: I0218 20:04:07.065202 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-df7zx"] Feb 18 20:04:07 crc kubenswrapper[4932]: I0218 20:04:07.072483 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-df7zx"] Feb 18 20:04:07 crc kubenswrapper[4932]: I0218 20:04:07.219963 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="300b7bcb-1caa-440a-88bc-dc2c4e3b43cd" path="/var/lib/kubelet/pods/300b7bcb-1caa-440a-88bc-dc2c4e3b43cd/volumes" Feb 18 20:04:07 crc kubenswrapper[4932]: I0218 20:04:07.220691 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30efc86e-0c26-42e4-b907-1d4d985912ed" path="/var/lib/kubelet/pods/30efc86e-0c26-42e4-b907-1d4d985912ed/volumes" Feb 18 20:04:12 crc kubenswrapper[4932]: I0218 20:04:12.180743 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:04:12 crc kubenswrapper[4932]: E0218 20:04:12.181740 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:04:19 crc kubenswrapper[4932]: I0218 20:04:19.049960 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-cpzcj"] Feb 18 20:04:19 crc kubenswrapper[4932]: I0218 20:04:19.068660 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-cpzcj"] Feb 18 20:04:19 crc kubenswrapper[4932]: I0218 20:04:19.201363 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43f771cb-173f-4939-b1d1-e7d1b21834cb" 
path="/var/lib/kubelet/pods/43f771cb-173f-4939-b1d1-e7d1b21834cb/volumes" Feb 18 20:04:20 crc kubenswrapper[4932]: I0218 20:04:20.046857 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-kfzmp"] Feb 18 20:04:20 crc kubenswrapper[4932]: I0218 20:04:20.062418 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-kfzmp"] Feb 18 20:04:21 crc kubenswrapper[4932]: I0218 20:04:21.037523 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nqxxn"] Feb 18 20:04:21 crc kubenswrapper[4932]: I0218 20:04:21.045888 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nqxxn"] Feb 18 20:04:21 crc kubenswrapper[4932]: I0218 20:04:21.194043 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f831817-b833-4ee3-b1e9-77d9c02416ed" path="/var/lib/kubelet/pods/3f831817-b833-4ee3-b1e9-77d9c02416ed/volumes" Feb 18 20:04:21 crc kubenswrapper[4932]: I0218 20:04:21.195259 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c20fc2-cf78-41c9-9e37-c5bea35d472f" path="/var/lib/kubelet/pods/c4c20fc2-cf78-41c9-9e37-c5bea35d472f/volumes" Feb 18 20:04:26 crc kubenswrapper[4932]: I0218 20:04:26.180282 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:04:26 crc kubenswrapper[4932]: E0218 20:04:26.181449 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:04:38 crc kubenswrapper[4932]: I0218 20:04:38.179030 4932 scope.go:117] "RemoveContainer" 
containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:04:38 crc kubenswrapper[4932]: E0218 20:04:38.179823 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:04:49 crc kubenswrapper[4932]: I0218 20:04:49.179962 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:04:49 crc kubenswrapper[4932]: E0218 20:04:49.180960 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:04:52 crc kubenswrapper[4932]: I0218 20:04:52.043390 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a786-account-create-update-jrb5b"] Feb 18 20:04:52 crc kubenswrapper[4932]: I0218 20:04:52.059596 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a786-account-create-update-jrb5b"] Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.076270 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2fd4-account-create-update-s9r68"] Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.085899 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qlt9g"] Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.095325 
4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-zxht6"] Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.104708 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2fd4-account-create-update-s9r68"] Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.117109 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-zxht6"] Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.128645 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qlt9g"] Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.195469 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20264fab-dfb6-4e8c-90c3-755f6877b798" path="/var/lib/kubelet/pods/20264fab-dfb6-4e8c-90c3-755f6877b798/volumes" Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.196543 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6ae5264-a3f4-4f05-b7ff-942b182ee6e6" path="/var/lib/kubelet/pods/a6ae5264-a3f4-4f05-b7ff-942b182ee6e6/volumes" Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.197522 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec70d32-3fdc-410f-9d9d-9b108e079cfe" path="/var/lib/kubelet/pods/aec70d32-3fdc-410f-9d9d-9b108e079cfe/volumes" Feb 18 20:04:53 crc kubenswrapper[4932]: I0218 20:04:53.198368 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc8867f-cb56-47ad-9d08-a25feca678fc" path="/var/lib/kubelet/pods/ccc8867f-cb56-47ad-9d08-a25feca678fc/volumes" Feb 18 20:04:54 crc kubenswrapper[4932]: I0218 20:04:54.029739 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5405-account-create-update-8fjff"] Feb 18 20:04:54 crc kubenswrapper[4932]: I0218 20:04:54.041207 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xdsn5"] Feb 18 20:04:54 crc kubenswrapper[4932]: I0218 
20:04:54.054093 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5405-account-create-update-8fjff"] Feb 18 20:04:54 crc kubenswrapper[4932]: I0218 20:04:54.066374 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xdsn5"] Feb 18 20:04:55 crc kubenswrapper[4932]: I0218 20:04:55.203046 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7703d71c-4ee9-4495-ab74-0a76c148d377" path="/var/lib/kubelet/pods/7703d71c-4ee9-4495-ab74-0a76c148d377/volumes" Feb 18 20:04:55 crc kubenswrapper[4932]: I0218 20:04:55.205646 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b44b5c9c-2c44-4e46-a14f-a8a0c93781d3" path="/var/lib/kubelet/pods/b44b5c9c-2c44-4e46-a14f-a8a0c93781d3/volumes" Feb 18 20:05:03 crc kubenswrapper[4932]: I0218 20:05:03.180139 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.268593 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"93b2aadde96a1cb53f394f160a8c65ff537540cf335aacf73c90625c7fb96dd4"} Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.486163 4932 scope.go:117] "RemoveContainer" containerID="35671852602ab05670d4f45f3855e4d52f08702c9d127db3894e27656cb622ec" Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.528458 4932 scope.go:117] "RemoveContainer" containerID="a9bd3203306587d945952a2d8b8a38aa992a6b26567d9b7e7b075edf3005412d" Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.568622 4932 scope.go:117] "RemoveContainer" containerID="561bed36cff9fe4632c1003655b4ef598d4e8ea47f27f52a6c7b3f87e135ec7f" Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.632735 4932 scope.go:117] "RemoveContainer" 
containerID="80213ebfed248f23a59e2cc3d7242b684303a348ef8453068ab05718b9f4df29" Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.682275 4932 scope.go:117] "RemoveContainer" containerID="e02396c72df7f91c2b9a6adb3ff52d02133d145e009ed0755b0356a1da74ee73" Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.728313 4932 scope.go:117] "RemoveContainer" containerID="49092cff964806110781a1ce6f40a2126d58bcb45c2544f984759951802714c3" Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.763471 4932 scope.go:117] "RemoveContainer" containerID="d60abba7265ba14494902810d1153e145d30148ef253f739d8bb7a9a9675f1f8" Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.791226 4932 scope.go:117] "RemoveContainer" containerID="5ccb855943775d6e9adaf49444e172677634a8b560d436edfff1c39a86a31e48" Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.817288 4932 scope.go:117] "RemoveContainer" containerID="2eac601de5fc1220879b1962da46431b85d3f67bca44ebc6031ccc59809d3f58" Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.841680 4932 scope.go:117] "RemoveContainer" containerID="682f69e31fcb10c9b585e4fbecb1e2d4f8e82e3ec0c03204e9e0fefc1d901753" Feb 18 20:05:04 crc kubenswrapper[4932]: I0218 20:05:04.876986 4932 scope.go:117] "RemoveContainer" containerID="708bd68c17f2cb8bb6aefdb45fc9ab2a2b088e8be75ba3d7c52b1b8b365c0f1f" Feb 18 20:05:19 crc kubenswrapper[4932]: I0218 20:05:19.428793 4932 generic.go:334] "Generic (PLEG): container finished" podID="12f764db-8a47-4554-bea3-c71b6663cdec" containerID="7b6cbe66b4880567e8856647b80588deb9823126d1f35e4825ba3e7a73a88b7a" exitCode=0 Feb 18 20:05:19 crc kubenswrapper[4932]: I0218 20:05:19.428905 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" event={"ID":"12f764db-8a47-4554-bea3-c71b6663cdec","Type":"ContainerDied","Data":"7b6cbe66b4880567e8856647b80588deb9823126d1f35e4825ba3e7a73a88b7a"} Feb 18 20:05:20 crc kubenswrapper[4932]: I0218 20:05:20.880604 4932 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.060050 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-inventory\") pod \"12f764db-8a47-4554-bea3-c71b6663cdec\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.060167 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-ssh-key-openstack-edpm-ipam\") pod \"12f764db-8a47-4554-bea3-c71b6663cdec\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.060515 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5574d\" (UniqueName: \"kubernetes.io/projected/12f764db-8a47-4554-bea3-c71b6663cdec-kube-api-access-5574d\") pod \"12f764db-8a47-4554-bea3-c71b6663cdec\" (UID: \"12f764db-8a47-4554-bea3-c71b6663cdec\") " Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.068848 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f764db-8a47-4554-bea3-c71b6663cdec-kube-api-access-5574d" (OuterVolumeSpecName: "kube-api-access-5574d") pod "12f764db-8a47-4554-bea3-c71b6663cdec" (UID: "12f764db-8a47-4554-bea3-c71b6663cdec"). InnerVolumeSpecName "kube-api-access-5574d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.090371 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-inventory" (OuterVolumeSpecName: "inventory") pod "12f764db-8a47-4554-bea3-c71b6663cdec" (UID: "12f764db-8a47-4554-bea3-c71b6663cdec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.091142 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "12f764db-8a47-4554-bea3-c71b6663cdec" (UID: "12f764db-8a47-4554-bea3-c71b6663cdec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.162792 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5574d\" (UniqueName: \"kubernetes.io/projected/12f764db-8a47-4554-bea3-c71b6663cdec-kube-api-access-5574d\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.162827 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.162837 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12f764db-8a47-4554-bea3-c71b6663cdec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.447258 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" 
event={"ID":"12f764db-8a47-4554-bea3-c71b6663cdec","Type":"ContainerDied","Data":"1d9a351b5e32ee2106112cc0bb1d8becea3a0f04cb2a76228d9bf8ba749d8d89"} Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.447300 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d9a351b5e32ee2106112cc0bb1d8becea3a0f04cb2a76228d9bf8ba749d8d89" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.447299 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q8vzx" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.526752 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq"] Feb 18 20:05:21 crc kubenswrapper[4932]: E0218 20:05:21.527398 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f764db-8a47-4554-bea3-c71b6663cdec" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.527421 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f764db-8a47-4554-bea3-c71b6663cdec" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.527643 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="12f764db-8a47-4554-bea3-c71b6663cdec" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.528400 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.531295 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.531351 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.531407 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.531617 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.535639 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq"] Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.673197 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq9bk\" (UniqueName: \"kubernetes.io/projected/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-kube-api-access-pq9bk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.673305 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 
20:05:21.673547 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.775855 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.776509 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq9bk\" (UniqueName: \"kubernetes.io/projected/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-kube-api-access-pq9bk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.776584 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.782426 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.782441 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.805119 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq9bk\" (UniqueName: \"kubernetes.io/projected/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-kube-api-access-pq9bk\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:21 crc kubenswrapper[4932]: I0218 20:05:21.853712 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:22 crc kubenswrapper[4932]: I0218 20:05:22.406603 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq"] Feb 18 20:05:22 crc kubenswrapper[4932]: I0218 20:05:22.461291 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" event={"ID":"dd9738d8-59a2-4c1a-b9af-58d1f7efd947","Type":"ContainerStarted","Data":"1813233506b02ea3afb1cbb753a84e217bbaf1d4c7c50db6b62f9230f4b4f44a"} Feb 18 20:05:23 crc kubenswrapper[4932]: I0218 20:05:23.472687 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" event={"ID":"dd9738d8-59a2-4c1a-b9af-58d1f7efd947","Type":"ContainerStarted","Data":"dacc67acdb626da274e85e8df5c7e1e59e99d2547879ff21d240d39c66eeabdd"} Feb 18 20:05:23 crc kubenswrapper[4932]: I0218 20:05:23.503145 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" podStartSLOduration=2.020672416 podStartE2EDuration="2.503076505s" podCreationTimestamp="2026-02-18 20:05:21 +0000 UTC" firstStartedPulling="2026-02-18 20:05:22.392252212 +0000 UTC m=+1885.974207077" lastFinishedPulling="2026-02-18 20:05:22.874656291 +0000 UTC m=+1886.456611166" observedRunningTime="2026-02-18 20:05:23.491855367 +0000 UTC m=+1887.073810232" watchObservedRunningTime="2026-02-18 20:05:23.503076505 +0000 UTC m=+1887.085031360" Feb 18 20:05:28 crc kubenswrapper[4932]: I0218 20:05:28.526953 4932 generic.go:334] "Generic (PLEG): container finished" podID="dd9738d8-59a2-4c1a-b9af-58d1f7efd947" containerID="dacc67acdb626da274e85e8df5c7e1e59e99d2547879ff21d240d39c66eeabdd" exitCode=0 Feb 18 20:05:28 crc kubenswrapper[4932]: I0218 20:05:28.527025 4932 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" event={"ID":"dd9738d8-59a2-4c1a-b9af-58d1f7efd947","Type":"ContainerDied","Data":"dacc67acdb626da274e85e8df5c7e1e59e99d2547879ff21d240d39c66eeabdd"} Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.033003 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.056700 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-inventory\") pod \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.056894 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-ssh-key-openstack-edpm-ipam\") pod \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.056921 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq9bk\" (UniqueName: \"kubernetes.io/projected/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-kube-api-access-pq9bk\") pod \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\" (UID: \"dd9738d8-59a2-4c1a-b9af-58d1f7efd947\") " Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.065525 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-kube-api-access-pq9bk" (OuterVolumeSpecName: "kube-api-access-pq9bk") pod "dd9738d8-59a2-4c1a-b9af-58d1f7efd947" (UID: "dd9738d8-59a2-4c1a-b9af-58d1f7efd947"). InnerVolumeSpecName "kube-api-access-pq9bk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.095326 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-inventory" (OuterVolumeSpecName: "inventory") pod "dd9738d8-59a2-4c1a-b9af-58d1f7efd947" (UID: "dd9738d8-59a2-4c1a-b9af-58d1f7efd947"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.095378 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd9738d8-59a2-4c1a-b9af-58d1f7efd947" (UID: "dd9738d8-59a2-4c1a-b9af-58d1f7efd947"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.158825 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.158863 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq9bk\" (UniqueName: \"kubernetes.io/projected/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-kube-api-access-pq9bk\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.158874 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd9738d8-59a2-4c1a-b9af-58d1f7efd947-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.544906 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" 
event={"ID":"dd9738d8-59a2-4c1a-b9af-58d1f7efd947","Type":"ContainerDied","Data":"1813233506b02ea3afb1cbb753a84e217bbaf1d4c7c50db6b62f9230f4b4f44a"} Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.544942 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1813233506b02ea3afb1cbb753a84e217bbaf1d4c7c50db6b62f9230f4b4f44a" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.544984 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6kgtq" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.621419 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml"] Feb 18 20:05:30 crc kubenswrapper[4932]: E0218 20:05:30.621798 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9738d8-59a2-4c1a-b9af-58d1f7efd947" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.621816 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9738d8-59a2-4c1a-b9af-58d1f7efd947" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.622030 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9738d8-59a2-4c1a-b9af-58d1f7efd947" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.622680 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.624616 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.625044 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.625261 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.625951 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.634897 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml"] Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.668649 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.668750 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.668772 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjsrp\" (UniqueName: \"kubernetes.io/projected/a3390076-ebf5-4856-9646-e7f82a4b5f28-kube-api-access-fjsrp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.770976 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.771045 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.771079 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjsrp\" (UniqueName: \"kubernetes.io/projected/a3390076-ebf5-4856-9646-e7f82a4b5f28-kube-api-access-fjsrp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.775593 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.775999 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.799608 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjsrp\" (UniqueName: \"kubernetes.io/projected/a3390076-ebf5-4856-9646-e7f82a4b5f28-kube-api-access-fjsrp\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v6jml\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:30 crc kubenswrapper[4932]: I0218 20:05:30.975945 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:05:31 crc kubenswrapper[4932]: I0218 20:05:31.525165 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml"] Feb 18 20:05:31 crc kubenswrapper[4932]: I0218 20:05:31.576057 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" event={"ID":"a3390076-ebf5-4856-9646-e7f82a4b5f28","Type":"ContainerStarted","Data":"d9e23bb9221f30e4c904e27e398042c535ab54fd9e12565fb3207e201a576c68"} Feb 18 20:05:32 crc kubenswrapper[4932]: I0218 20:05:32.591090 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" event={"ID":"a3390076-ebf5-4856-9646-e7f82a4b5f28","Type":"ContainerStarted","Data":"50bc0af33067e1ceb9abed0771cb3bdb1fe3c9bb9acc856016812709bcf5281d"} Feb 18 20:05:32 crc kubenswrapper[4932]: I0218 20:05:32.618390 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" podStartSLOduration=2.119119346 podStartE2EDuration="2.618372692s" podCreationTimestamp="2026-02-18 20:05:30 +0000 UTC" firstStartedPulling="2026-02-18 20:05:31.539798957 +0000 UTC m=+1895.121753802" lastFinishedPulling="2026-02-18 20:05:32.039052293 +0000 UTC m=+1895.621007148" observedRunningTime="2026-02-18 20:05:32.608560989 +0000 UTC m=+1896.190515854" watchObservedRunningTime="2026-02-18 20:05:32.618372692 +0000 UTC m=+1896.200327537" Feb 18 20:05:38 crc kubenswrapper[4932]: I0218 20:05:38.068395 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-64b8m"] Feb 18 20:05:38 crc kubenswrapper[4932]: I0218 20:05:38.082607 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-64b8m"] Feb 18 20:05:39 crc kubenswrapper[4932]: I0218 
20:05:39.193232 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c88334ec-64f6-41ba-aee5-d5323e8c0c25" path="/var/lib/kubelet/pods/c88334ec-64f6-41ba-aee5-d5323e8c0c25/volumes" Feb 18 20:06:03 crc kubenswrapper[4932]: I0218 20:06:03.054252 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-xlzdb"] Feb 18 20:06:03 crc kubenswrapper[4932]: I0218 20:06:03.079573 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-xlzdb"] Feb 18 20:06:03 crc kubenswrapper[4932]: I0218 20:06:03.189478 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6473c7ac-af7d-4556-aa86-28aabc85694a" path="/var/lib/kubelet/pods/6473c7ac-af7d-4556-aa86-28aabc85694a/volumes" Feb 18 20:06:05 crc kubenswrapper[4932]: I0218 20:06:05.071689 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f756w"] Feb 18 20:06:05 crc kubenswrapper[4932]: I0218 20:06:05.087504 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f756w"] Feb 18 20:06:05 crc kubenswrapper[4932]: I0218 20:06:05.166120 4932 scope.go:117] "RemoveContainer" containerID="4e7866a2ddd0a42f76d440fa6b1c16f63d3f4f13968f3f538f0dc810522b826b" Feb 18 20:06:05 crc kubenswrapper[4932]: I0218 20:06:05.189508 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d3a07cf-a084-46a0-8ca2-830e0838d575" path="/var/lib/kubelet/pods/5d3a07cf-a084-46a0-8ca2-830e0838d575/volumes" Feb 18 20:06:05 crc kubenswrapper[4932]: I0218 20:06:05.209372 4932 scope.go:117] "RemoveContainer" containerID="7ff7e9bf05a2ba3237ddc130003a316b61a512ddd8b5c858384cd739b41a1cfd" Feb 18 20:06:09 crc kubenswrapper[4932]: I0218 20:06:09.969090 4932 generic.go:334] "Generic (PLEG): container finished" podID="a3390076-ebf5-4856-9646-e7f82a4b5f28" containerID="50bc0af33067e1ceb9abed0771cb3bdb1fe3c9bb9acc856016812709bcf5281d" exitCode=0 Feb 18 
20:06:09 crc kubenswrapper[4932]: I0218 20:06:09.969231 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" event={"ID":"a3390076-ebf5-4856-9646-e7f82a4b5f28","Type":"ContainerDied","Data":"50bc0af33067e1ceb9abed0771cb3bdb1fe3c9bb9acc856016812709bcf5281d"} Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.527356 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.652137 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-inventory\") pod \"a3390076-ebf5-4856-9646-e7f82a4b5f28\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.652355 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjsrp\" (UniqueName: \"kubernetes.io/projected/a3390076-ebf5-4856-9646-e7f82a4b5f28-kube-api-access-fjsrp\") pod \"a3390076-ebf5-4856-9646-e7f82a4b5f28\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.652490 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-ssh-key-openstack-edpm-ipam\") pod \"a3390076-ebf5-4856-9646-e7f82a4b5f28\" (UID: \"a3390076-ebf5-4856-9646-e7f82a4b5f28\") " Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.657657 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3390076-ebf5-4856-9646-e7f82a4b5f28-kube-api-access-fjsrp" (OuterVolumeSpecName: "kube-api-access-fjsrp") pod "a3390076-ebf5-4856-9646-e7f82a4b5f28" (UID: "a3390076-ebf5-4856-9646-e7f82a4b5f28"). 
InnerVolumeSpecName "kube-api-access-fjsrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.689691 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-inventory" (OuterVolumeSpecName: "inventory") pod "a3390076-ebf5-4856-9646-e7f82a4b5f28" (UID: "a3390076-ebf5-4856-9646-e7f82a4b5f28"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.700357 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a3390076-ebf5-4856-9646-e7f82a4b5f28" (UID: "a3390076-ebf5-4856-9646-e7f82a4b5f28"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.754329 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjsrp\" (UniqueName: \"kubernetes.io/projected/a3390076-ebf5-4856-9646-e7f82a4b5f28-kube-api-access-fjsrp\") on node \"crc\" DevicePath \"\"" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.754370 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.754388 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3390076-ebf5-4856-9646-e7f82a4b5f28-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.993920 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" event={"ID":"a3390076-ebf5-4856-9646-e7f82a4b5f28","Type":"ContainerDied","Data":"d9e23bb9221f30e4c904e27e398042c535ab54fd9e12565fb3207e201a576c68"} Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.993960 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9e23bb9221f30e4c904e27e398042c535ab54fd9e12565fb3207e201a576c68" Feb 18 20:06:11 crc kubenswrapper[4932]: I0218 20:06:11.994424 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v6jml" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.134155 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp"] Feb 18 20:06:12 crc kubenswrapper[4932]: E0218 20:06:12.134676 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3390076-ebf5-4856-9646-e7f82a4b5f28" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.134693 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3390076-ebf5-4856-9646-e7f82a4b5f28" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.134872 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3390076-ebf5-4856-9646-e7f82a4b5f28" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.135545 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.138237 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.138490 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.138716 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.138943 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.152357 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp"] Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.263894 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.264267 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.264326 
4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8vst\" (UniqueName: \"kubernetes.io/projected/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-kube-api-access-b8vst\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.365771 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.365896 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.365949 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8vst\" (UniqueName: \"kubernetes.io/projected/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-kube-api-access-b8vst\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.370762 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.371079 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.382783 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8vst\" (UniqueName: \"kubernetes.io/projected/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-kube-api-access-b8vst\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:12 crc kubenswrapper[4932]: I0218 20:06:12.492944 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:06:13 crc kubenswrapper[4932]: I0218 20:06:13.028347 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp"] Feb 18 20:06:14 crc kubenswrapper[4932]: I0218 20:06:14.024569 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" event={"ID":"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab","Type":"ContainerStarted","Data":"d5dcd19c405c09a09a5c129ab068a2aad22357b5c4e4a4481c35f8e392967a5c"} Feb 18 20:06:14 crc kubenswrapper[4932]: I0218 20:06:14.024962 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" event={"ID":"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab","Type":"ContainerStarted","Data":"93beb98692f78ee18e33929ce67ee50501cc2c2ec35b52dc31e5b3ff5a85e1d7"} Feb 18 20:06:14 crc kubenswrapper[4932]: I0218 20:06:14.050803 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" podStartSLOduration=1.647470701 podStartE2EDuration="2.050786403s" podCreationTimestamp="2026-02-18 20:06:12 +0000 UTC" firstStartedPulling="2026-02-18 20:06:13.03491345 +0000 UTC m=+1936.616868295" lastFinishedPulling="2026-02-18 20:06:13.438229152 +0000 UTC m=+1937.020183997" observedRunningTime="2026-02-18 20:06:14.045743728 +0000 UTC m=+1937.627698613" watchObservedRunningTime="2026-02-18 20:06:14.050786403 +0000 UTC m=+1937.632741248" Feb 18 20:06:47 crc kubenswrapper[4932]: I0218 20:06:47.056892 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-kf5w6"] Feb 18 20:06:47 crc kubenswrapper[4932]: I0218 20:06:47.071390 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-kf5w6"] Feb 18 20:06:47 crc kubenswrapper[4932]: I0218 
20:06:47.193986 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="738744b3-86e1-432c-8380-0d428a2e8263" path="/var/lib/kubelet/pods/738744b3-86e1-432c-8380-0d428a2e8263/volumes" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.648690 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hp6v9"] Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.652474 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.662100 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hp6v9"] Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.776659 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sccvp\" (UniqueName: \"kubernetes.io/projected/9385117b-aef4-4fc9-9633-c237337beea2-kube-api-access-sccvp\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.777050 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-catalog-content\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.777110 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-utilities\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 
20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.880069 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sccvp\" (UniqueName: \"kubernetes.io/projected/9385117b-aef4-4fc9-9633-c237337beea2-kube-api-access-sccvp\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.880220 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-catalog-content\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.880552 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-utilities\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.880745 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-catalog-content\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.881268 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-utilities\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 
20:07:02.911147 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sccvp\" (UniqueName: \"kubernetes.io/projected/9385117b-aef4-4fc9-9633-c237337beea2-kube-api-access-sccvp\") pod \"certified-operators-hp6v9\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:02 crc kubenswrapper[4932]: I0218 20:07:02.989484 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:03 crc kubenswrapper[4932]: I0218 20:07:03.576211 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hp6v9"] Feb 18 20:07:04 crc kubenswrapper[4932]: I0218 20:07:04.533623 4932 generic.go:334] "Generic (PLEG): container finished" podID="9385117b-aef4-4fc9-9633-c237337beea2" containerID="2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da" exitCode=0 Feb 18 20:07:04 crc kubenswrapper[4932]: I0218 20:07:04.533704 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerDied","Data":"2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da"} Feb 18 20:07:04 crc kubenswrapper[4932]: I0218 20:07:04.534125 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerStarted","Data":"6ebce87b6b61ea80c41ffd1203cdea867546f05926dea452da2ed3b5a10dd57d"} Feb 18 20:07:04 crc kubenswrapper[4932]: I0218 20:07:04.537304 4932 generic.go:334] "Generic (PLEG): container finished" podID="6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" containerID="d5dcd19c405c09a09a5c129ab068a2aad22357b5c4e4a4481c35f8e392967a5c" exitCode=0 Feb 18 20:07:04 crc kubenswrapper[4932]: I0218 20:07:04.537359 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" event={"ID":"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab","Type":"ContainerDied","Data":"d5dcd19c405c09a09a5c129ab068a2aad22357b5c4e4a4481c35f8e392967a5c"} Feb 18 20:07:05 crc kubenswrapper[4932]: I0218 20:07:05.543986 4932 scope.go:117] "RemoveContainer" containerID="e000a4553afc7ad7dbb58680bc4724da86a258372aee2e0c10f7e863173c5a10" Feb 18 20:07:05 crc kubenswrapper[4932]: I0218 20:07:05.547771 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerStarted","Data":"11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2"} Feb 18 20:07:05 crc kubenswrapper[4932]: I0218 20:07:05.598041 4932 scope.go:117] "RemoveContainer" containerID="9bb9eedee5db3508051ad5cf9468f19b751623f5c59dfbe177da134d00b7fc1f" Feb 18 20:07:05 crc kubenswrapper[4932]: I0218 20:07:05.972731 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.045760 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-inventory\") pod \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.045893 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-ssh-key-openstack-edpm-ipam\") pod \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.046008 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8vst\" (UniqueName: \"kubernetes.io/projected/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-kube-api-access-b8vst\") pod \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\" (UID: \"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab\") " Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.052574 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-kube-api-access-b8vst" (OuterVolumeSpecName: "kube-api-access-b8vst") pod "6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" (UID: "6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab"). InnerVolumeSpecName "kube-api-access-b8vst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.076960 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" (UID: "6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.077372 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-inventory" (OuterVolumeSpecName: "inventory") pod "6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" (UID: "6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.149133 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.149711 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.149776 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8vst\" (UniqueName: \"kubernetes.io/projected/6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab-kube-api-access-b8vst\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.558586 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" 
event={"ID":"6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab","Type":"ContainerDied","Data":"93beb98692f78ee18e33929ce67ee50501cc2c2ec35b52dc31e5b3ff5a85e1d7"} Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.558624 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93beb98692f78ee18e33929ce67ee50501cc2c2ec35b52dc31e5b3ff5a85e1d7" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.558691 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ctdbp" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.563164 4932 generic.go:334] "Generic (PLEG): container finished" podID="9385117b-aef4-4fc9-9633-c237337beea2" containerID="11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2" exitCode=0 Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.563249 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerDied","Data":"11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2"} Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.671574 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qp4lv"] Feb 18 20:07:06 crc kubenswrapper[4932]: E0218 20:07:06.672363 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.672450 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.672713 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be3cc3f-c50e-4189-b5aa-ab4f211ed4ab" 
containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.673629 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.676540 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.676714 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.677079 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.677468 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.679704 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qp4lv"] Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.764944 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.765016 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhhhw\" (UniqueName: \"kubernetes.io/projected/1f19857d-f085-411f-a08f-412d1173ed1c-kube-api-access-nhhhw\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.765048 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.866936 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.867038 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhhhw\" (UniqueName: \"kubernetes.io/projected/1f19857d-f085-411f-a08f-412d1173ed1c-kube-api-access-nhhhw\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.867085 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.871580 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.879647 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.884564 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhhhw\" (UniqueName: \"kubernetes.io/projected/1f19857d-f085-411f-a08f-412d1173ed1c-kube-api-access-nhhhw\") pod \"ssh-known-hosts-edpm-deployment-qp4lv\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:06 crc kubenswrapper[4932]: I0218 20:07:06.998720 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:07 crc kubenswrapper[4932]: I0218 20:07:07.562481 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qp4lv"] Feb 18 20:07:07 crc kubenswrapper[4932]: I0218 20:07:07.577801 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerStarted","Data":"f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346"} Feb 18 20:07:07 crc kubenswrapper[4932]: I0218 20:07:07.580157 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" event={"ID":"1f19857d-f085-411f-a08f-412d1173ed1c","Type":"ContainerStarted","Data":"b76d6ccafd2c0451bf7aa7ed91aceca0b2f73e23637c2896e2b6163faeb9266a"} Feb 18 20:07:07 crc kubenswrapper[4932]: I0218 20:07:07.599500 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hp6v9" podStartSLOduration=3.177628936 podStartE2EDuration="5.599482537s" podCreationTimestamp="2026-02-18 20:07:02 +0000 UTC" firstStartedPulling="2026-02-18 20:07:04.54872277 +0000 UTC m=+1988.130677625" lastFinishedPulling="2026-02-18 20:07:06.970576381 +0000 UTC m=+1990.552531226" observedRunningTime="2026-02-18 20:07:07.593920039 +0000 UTC m=+1991.175874894" watchObservedRunningTime="2026-02-18 20:07:07.599482537 +0000 UTC m=+1991.181437382" Feb 18 20:07:08 crc kubenswrapper[4932]: I0218 20:07:08.596155 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" event={"ID":"1f19857d-f085-411f-a08f-412d1173ed1c","Type":"ContainerStarted","Data":"cb369d03d8bcd330fee85bc5a8dacb1da44e035006df4f2345401aa8ff8cca9a"} Feb 18 20:07:08 crc kubenswrapper[4932]: I0218 20:07:08.624844 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" podStartSLOduration=2.219164514 podStartE2EDuration="2.624817734s" podCreationTimestamp="2026-02-18 20:07:06 +0000 UTC" firstStartedPulling="2026-02-18 20:07:07.556345019 +0000 UTC m=+1991.138299874" lastFinishedPulling="2026-02-18 20:07:07.961998229 +0000 UTC m=+1991.543953094" observedRunningTime="2026-02-18 20:07:08.609985047 +0000 UTC m=+1992.191939902" watchObservedRunningTime="2026-02-18 20:07:08.624817734 +0000 UTC m=+1992.206772599" Feb 18 20:07:12 crc kubenswrapper[4932]: I0218 20:07:12.989898 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:12 crc kubenswrapper[4932]: I0218 20:07:12.990331 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:13 crc kubenswrapper[4932]: I0218 20:07:13.060410 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:13 crc kubenswrapper[4932]: I0218 20:07:13.684417 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:13 crc kubenswrapper[4932]: I0218 20:07:13.741890 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hp6v9"] Feb 18 20:07:15 crc kubenswrapper[4932]: I0218 20:07:15.661448 4932 generic.go:334] "Generic (PLEG): container finished" podID="1f19857d-f085-411f-a08f-412d1173ed1c" containerID="cb369d03d8bcd330fee85bc5a8dacb1da44e035006df4f2345401aa8ff8cca9a" exitCode=0 Feb 18 20:07:15 crc kubenswrapper[4932]: I0218 20:07:15.661582 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" 
event={"ID":"1f19857d-f085-411f-a08f-412d1173ed1c","Type":"ContainerDied","Data":"cb369d03d8bcd330fee85bc5a8dacb1da44e035006df4f2345401aa8ff8cca9a"} Feb 18 20:07:15 crc kubenswrapper[4932]: I0218 20:07:15.662026 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hp6v9" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="registry-server" containerID="cri-o://f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346" gracePeriod=2 Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.161013 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.266354 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sccvp\" (UniqueName: \"kubernetes.io/projected/9385117b-aef4-4fc9-9633-c237337beea2-kube-api-access-sccvp\") pod \"9385117b-aef4-4fc9-9633-c237337beea2\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.266725 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-catalog-content\") pod \"9385117b-aef4-4fc9-9633-c237337beea2\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.266962 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-utilities\") pod \"9385117b-aef4-4fc9-9633-c237337beea2\" (UID: \"9385117b-aef4-4fc9-9633-c237337beea2\") " Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.267757 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-utilities" 
(OuterVolumeSpecName: "utilities") pod "9385117b-aef4-4fc9-9633-c237337beea2" (UID: "9385117b-aef4-4fc9-9633-c237337beea2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.268210 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.274531 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9385117b-aef4-4fc9-9633-c237337beea2-kube-api-access-sccvp" (OuterVolumeSpecName: "kube-api-access-sccvp") pod "9385117b-aef4-4fc9-9633-c237337beea2" (UID: "9385117b-aef4-4fc9-9633-c237337beea2"). InnerVolumeSpecName "kube-api-access-sccvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.317075 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9385117b-aef4-4fc9-9633-c237337beea2" (UID: "9385117b-aef4-4fc9-9633-c237337beea2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.370117 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sccvp\" (UniqueName: \"kubernetes.io/projected/9385117b-aef4-4fc9-9633-c237337beea2-kube-api-access-sccvp\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.370262 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9385117b-aef4-4fc9-9633-c237337beea2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.672152 4932 generic.go:334] "Generic (PLEG): container finished" podID="9385117b-aef4-4fc9-9633-c237337beea2" containerID="f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346" exitCode=0 Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.672233 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hp6v9" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.672250 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerDied","Data":"f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346"} Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.672505 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hp6v9" event={"ID":"9385117b-aef4-4fc9-9633-c237337beea2","Type":"ContainerDied","Data":"6ebce87b6b61ea80c41ffd1203cdea867546f05926dea452da2ed3b5a10dd57d"} Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.672524 4932 scope.go:117] "RemoveContainer" containerID="f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.721494 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-hp6v9"] Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.721881 4932 scope.go:117] "RemoveContainer" containerID="11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.733475 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hp6v9"] Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.746519 4932 scope.go:117] "RemoveContainer" containerID="2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.805243 4932 scope.go:117] "RemoveContainer" containerID="f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346" Feb 18 20:07:16 crc kubenswrapper[4932]: E0218 20:07:16.805819 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346\": container with ID starting with f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346 not found: ID does not exist" containerID="f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.805883 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346"} err="failed to get container status \"f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346\": rpc error: code = NotFound desc = could not find container \"f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346\": container with ID starting with f6a06d6bfc83999ced0f1750f3d39e9318eae8ab06a24793ff5277b1ae2c6346 not found: ID does not exist" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.805922 4932 scope.go:117] "RemoveContainer" 
containerID="11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2" Feb 18 20:07:16 crc kubenswrapper[4932]: E0218 20:07:16.809678 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2\": container with ID starting with 11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2 not found: ID does not exist" containerID="11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.809730 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2"} err="failed to get container status \"11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2\": rpc error: code = NotFound desc = could not find container \"11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2\": container with ID starting with 11cc9253b8f8d4c0220f8c047b3abd5b613257b57e4fea0bbad60acfe7ffa9a2 not found: ID does not exist" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.809757 4932 scope.go:117] "RemoveContainer" containerID="2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da" Feb 18 20:07:16 crc kubenswrapper[4932]: E0218 20:07:16.810122 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da\": container with ID starting with 2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da not found: ID does not exist" containerID="2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da" Feb 18 20:07:16 crc kubenswrapper[4932]: I0218 20:07:16.810154 4932 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da"} err="failed to get container status \"2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da\": rpc error: code = NotFound desc = could not find container \"2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da\": container with ID starting with 2de7d5cc8c9f7a412555c008a15c44dcfc3a3bb05057317508b1ece2c52103da not found: ID does not exist" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.098770 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.184294 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-inventory-0\") pod \"1f19857d-f085-411f-a08f-412d1173ed1c\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.184370 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-ssh-key-openstack-edpm-ipam\") pod \"1f19857d-f085-411f-a08f-412d1173ed1c\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.184416 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhhhw\" (UniqueName: \"kubernetes.io/projected/1f19857d-f085-411f-a08f-412d1173ed1c-kube-api-access-nhhhw\") pod \"1f19857d-f085-411f-a08f-412d1173ed1c\" (UID: \"1f19857d-f085-411f-a08f-412d1173ed1c\") " Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.190550 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f19857d-f085-411f-a08f-412d1173ed1c-kube-api-access-nhhhw" (OuterVolumeSpecName: 
"kube-api-access-nhhhw") pod "1f19857d-f085-411f-a08f-412d1173ed1c" (UID: "1f19857d-f085-411f-a08f-412d1173ed1c"). InnerVolumeSpecName "kube-api-access-nhhhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.192082 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9385117b-aef4-4fc9-9633-c237337beea2" path="/var/lib/kubelet/pods/9385117b-aef4-4fc9-9633-c237337beea2/volumes" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.217232 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1f19857d-f085-411f-a08f-412d1173ed1c" (UID: "1f19857d-f085-411f-a08f-412d1173ed1c"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.228344 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1f19857d-f085-411f-a08f-412d1173ed1c" (UID: "1f19857d-f085-411f-a08f-412d1173ed1c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.287300 4932 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.287696 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f19857d-f085-411f-a08f-412d1173ed1c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.287710 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhhhw\" (UniqueName: \"kubernetes.io/projected/1f19857d-f085-411f-a08f-412d1173ed1c-kube-api-access-nhhhw\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.688609 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" event={"ID":"1f19857d-f085-411f-a08f-412d1173ed1c","Type":"ContainerDied","Data":"b76d6ccafd2c0451bf7aa7ed91aceca0b2f73e23637c2896e2b6163faeb9266a"} Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.688655 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b76d6ccafd2c0451bf7aa7ed91aceca0b2f73e23637c2896e2b6163faeb9266a" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.688717 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qp4lv" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.809224 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck"] Feb 18 20:07:17 crc kubenswrapper[4932]: E0218 20:07:17.819975 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="extract-utilities" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.820008 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="extract-utilities" Feb 18 20:07:17 crc kubenswrapper[4932]: E0218 20:07:17.820071 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f19857d-f085-411f-a08f-412d1173ed1c" containerName="ssh-known-hosts-edpm-deployment" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.820081 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f19857d-f085-411f-a08f-412d1173ed1c" containerName="ssh-known-hosts-edpm-deployment" Feb 18 20:07:17 crc kubenswrapper[4932]: E0218 20:07:17.820135 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="extract-content" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.820144 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="extract-content" Feb 18 20:07:17 crc kubenswrapper[4932]: E0218 20:07:17.820168 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="registry-server" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.820203 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="registry-server" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.820610 4932 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1f19857d-f085-411f-a08f-412d1173ed1c" containerName="ssh-known-hosts-edpm-deployment" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.820643 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9385117b-aef4-4fc9-9633-c237337beea2" containerName="registry-server" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.823731 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck"] Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.823902 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.827121 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.827414 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.827693 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.828031 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.900262 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtkkd\" (UniqueName: \"kubernetes.io/projected/86713106-5952-4409-b655-9f87008c2050-kube-api-access-vtkkd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.900438 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:17 crc kubenswrapper[4932]: I0218 20:07:17.900472 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.002580 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.002653 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.002704 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtkkd\" (UniqueName: \"kubernetes.io/projected/86713106-5952-4409-b655-9f87008c2050-kube-api-access-vtkkd\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.008748 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.015692 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.020148 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtkkd\" (UniqueName: \"kubernetes.io/projected/86713106-5952-4409-b655-9f87008c2050-kube-api-access-vtkkd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-t7bck\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.144024 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.671142 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck"] Feb 18 20:07:18 crc kubenswrapper[4932]: I0218 20:07:18.703127 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" event={"ID":"86713106-5952-4409-b655-9f87008c2050","Type":"ContainerStarted","Data":"ed88fc315bbf365fe36e0bdf4960a060dada12ee3c9b91e277fb93921e03f794"} Feb 18 20:07:19 crc kubenswrapper[4932]: I0218 20:07:19.715139 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" event={"ID":"86713106-5952-4409-b655-9f87008c2050","Type":"ContainerStarted","Data":"c1a20ccff30702f4bbf7da383ded31b7d96054a5d70fce66b5e248ac129367ad"} Feb 18 20:07:19 crc kubenswrapper[4932]: I0218 20:07:19.735951 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" podStartSLOduration=2.336633445 podStartE2EDuration="2.735930618s" podCreationTimestamp="2026-02-18 20:07:17 +0000 UTC" firstStartedPulling="2026-02-18 20:07:18.67383061 +0000 UTC m=+2002.255785505" lastFinishedPulling="2026-02-18 20:07:19.073127833 +0000 UTC m=+2002.655082678" observedRunningTime="2026-02-18 20:07:19.732533974 +0000 UTC m=+2003.314488819" watchObservedRunningTime="2026-02-18 20:07:19.735930618 +0000 UTC m=+2003.317885463" Feb 18 20:07:27 crc kubenswrapper[4932]: I0218 20:07:27.606971 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:07:27 crc kubenswrapper[4932]: I0218 
20:07:27.608108 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:07:27 crc kubenswrapper[4932]: I0218 20:07:27.805106 4932 generic.go:334] "Generic (PLEG): container finished" podID="86713106-5952-4409-b655-9f87008c2050" containerID="c1a20ccff30702f4bbf7da383ded31b7d96054a5d70fce66b5e248ac129367ad" exitCode=0 Feb 18 20:07:27 crc kubenswrapper[4932]: I0218 20:07:27.805158 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" event={"ID":"86713106-5952-4409-b655-9f87008c2050","Type":"ContainerDied","Data":"c1a20ccff30702f4bbf7da383ded31b7d96054a5d70fce66b5e248ac129367ad"} Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.312251 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.440034 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtkkd\" (UniqueName: \"kubernetes.io/projected/86713106-5952-4409-b655-9f87008c2050-kube-api-access-vtkkd\") pod \"86713106-5952-4409-b655-9f87008c2050\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.440767 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-ssh-key-openstack-edpm-ipam\") pod \"86713106-5952-4409-b655-9f87008c2050\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.441067 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-inventory\") pod \"86713106-5952-4409-b655-9f87008c2050\" (UID: \"86713106-5952-4409-b655-9f87008c2050\") " Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.450373 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86713106-5952-4409-b655-9f87008c2050-kube-api-access-vtkkd" (OuterVolumeSpecName: "kube-api-access-vtkkd") pod "86713106-5952-4409-b655-9f87008c2050" (UID: "86713106-5952-4409-b655-9f87008c2050"). InnerVolumeSpecName "kube-api-access-vtkkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.495048 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "86713106-5952-4409-b655-9f87008c2050" (UID: "86713106-5952-4409-b655-9f87008c2050"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.497717 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-inventory" (OuterVolumeSpecName: "inventory") pod "86713106-5952-4409-b655-9f87008c2050" (UID: "86713106-5952-4409-b655-9f87008c2050"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.544678 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.544710 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtkkd\" (UniqueName: \"kubernetes.io/projected/86713106-5952-4409-b655-9f87008c2050-kube-api-access-vtkkd\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.544723 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86713106-5952-4409-b655-9f87008c2050-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.831028 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" 
event={"ID":"86713106-5952-4409-b655-9f87008c2050","Type":"ContainerDied","Data":"ed88fc315bbf365fe36e0bdf4960a060dada12ee3c9b91e277fb93921e03f794"} Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.831088 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed88fc315bbf365fe36e0bdf4960a060dada12ee3c9b91e277fb93921e03f794" Feb 18 20:07:29 crc kubenswrapper[4932]: I0218 20:07:29.831166 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-t7bck" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.011402 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn"] Feb 18 20:07:30 crc kubenswrapper[4932]: E0218 20:07:30.012345 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86713106-5952-4409-b655-9f87008c2050" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.012361 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="86713106-5952-4409-b655-9f87008c2050" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.012597 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="86713106-5952-4409-b655-9f87008c2050" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.013572 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.016380 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.016516 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.018983 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.022207 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn"] Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.049100 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.156982 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.157048 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.157242 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2fr4\" (UniqueName: \"kubernetes.io/projected/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-kube-api-access-k2fr4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.258851 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2fr4\" (UniqueName: \"kubernetes.io/projected/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-kube-api-access-k2fr4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.259004 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.259040 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.263083 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.264794 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.277589 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2fr4\" (UniqueName: \"kubernetes.io/projected/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-kube-api-access-k2fr4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.372679 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.895457 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:07:30 crc kubenswrapper[4932]: I0218 20:07:30.901324 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn"] Feb 18 20:07:31 crc kubenswrapper[4932]: I0218 20:07:31.861616 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" event={"ID":"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96","Type":"ContainerStarted","Data":"4600e294e7206f35dd2b6e87432a29eeca386dd2a6f12b3cded6ac249c1945c9"} Feb 18 20:07:31 crc kubenswrapper[4932]: I0218 20:07:31.862041 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" event={"ID":"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96","Type":"ContainerStarted","Data":"d6f8dba346309a41de3f61f5fa8513166c779bc22740cd9e75ae130e7af4b053"} Feb 18 20:07:31 crc kubenswrapper[4932]: I0218 20:07:31.882585 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" podStartSLOduration=2.4061503269999998 podStartE2EDuration="2.882567268s" podCreationTimestamp="2026-02-18 20:07:29 +0000 UTC" firstStartedPulling="2026-02-18 20:07:30.895086998 +0000 UTC m=+2014.477041853" lastFinishedPulling="2026-02-18 20:07:31.371503939 +0000 UTC m=+2014.953458794" observedRunningTime="2026-02-18 20:07:31.879297227 +0000 UTC m=+2015.461252132" watchObservedRunningTime="2026-02-18 20:07:31.882567268 +0000 UTC m=+2015.464522113" Feb 18 20:07:40 crc kubenswrapper[4932]: I0218 20:07:40.955223 4932 generic.go:334] "Generic (PLEG): container finished" podID="7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" 
containerID="4600e294e7206f35dd2b6e87432a29eeca386dd2a6f12b3cded6ac249c1945c9" exitCode=0 Feb 18 20:07:40 crc kubenswrapper[4932]: I0218 20:07:40.955350 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" event={"ID":"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96","Type":"ContainerDied","Data":"4600e294e7206f35dd2b6e87432a29eeca386dd2a6f12b3cded6ac249c1945c9"} Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.452361 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.541295 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2fr4\" (UniqueName: \"kubernetes.io/projected/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-kube-api-access-k2fr4\") pod \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.541479 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-ssh-key-openstack-edpm-ipam\") pod \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.541563 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-inventory\") pod \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\" (UID: \"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96\") " Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.560623 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-kube-api-access-k2fr4" (OuterVolumeSpecName: "kube-api-access-k2fr4") pod 
"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" (UID: "7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96"). InnerVolumeSpecName "kube-api-access-k2fr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.572348 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-inventory" (OuterVolumeSpecName: "inventory") pod "7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" (UID: "7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.576132 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" (UID: "7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.644086 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2fr4\" (UniqueName: \"kubernetes.io/projected/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-kube-api-access-k2fr4\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.644131 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.644144 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.981334 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" event={"ID":"7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96","Type":"ContainerDied","Data":"d6f8dba346309a41de3f61f5fa8513166c779bc22740cd9e75ae130e7af4b053"} Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.981607 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6f8dba346309a41de3f61f5fa8513166c779bc22740cd9e75ae130e7af4b053" Feb 18 20:07:42 crc kubenswrapper[4932]: I0218 20:07:42.981428 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pdwbn" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.116601 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq"] Feb 18 20:07:43 crc kubenswrapper[4932]: E0218 20:07:43.117015 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.117034 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.117258 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a3208f2-3d3c-4ad9-8ab1-d9fec975fe96" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.117901 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.120005 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.121382 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.121746 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.121766 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.121774 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.123187 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.126558 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.127071 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.148285 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq"] Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.257671 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.257729 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.257766 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.257805 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.257830 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ovn-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.257910 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258025 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258141 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56x8\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-kube-api-access-k56x8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258336 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258373 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258404 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258528 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258561 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.258602 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.361234 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.361651 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.361880 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k56x8\" (UniqueName: 
\"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-kube-api-access-k56x8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.362147 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.362331 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.362499 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.362723 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.362857 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.363018 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.363342 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.363528 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.363699 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.363947 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.364224 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.368359 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.368429 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.368587 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.369288 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.370389 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 
crc kubenswrapper[4932]: I0218 20:07:43.370874 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.371146 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.371975 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.372130 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.372836 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.374061 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.375774 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.376637 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.384013 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k56x8\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-kube-api-access-k56x8\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-szztq\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:43 crc kubenswrapper[4932]: I0218 20:07:43.449472 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:07:44 crc kubenswrapper[4932]: I0218 20:07:44.078348 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq"] Feb 18 20:07:45 crc kubenswrapper[4932]: I0218 20:07:45.010225 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" event={"ID":"89184ceb-9c72-46ed-ae3f-27228af58cfc","Type":"ContainerStarted","Data":"1370d89c203eaedd57a26f8365c4d4c46629e61cc2bc1b7a5ded07e0ca770571"} Feb 18 20:07:45 crc kubenswrapper[4932]: I0218 20:07:45.010869 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" event={"ID":"89184ceb-9c72-46ed-ae3f-27228af58cfc","Type":"ContainerStarted","Data":"3f629ae06fe7522c8ae80af28f92eb58a1a8477afcd7e49d5bb7cbec8efa6b05"} Feb 18 20:07:45 crc kubenswrapper[4932]: I0218 20:07:45.036350 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" podStartSLOduration=1.6338945759999999 podStartE2EDuration="2.036333427s" podCreationTimestamp="2026-02-18 20:07:43 +0000 UTC" firstStartedPulling="2026-02-18 20:07:44.085208656 +0000 UTC m=+2027.667163511" lastFinishedPulling="2026-02-18 20:07:44.487647517 +0000 UTC m=+2028.069602362" observedRunningTime="2026-02-18 20:07:45.03360699 +0000 UTC m=+2028.615561835" watchObservedRunningTime="2026-02-18 20:07:45.036333427 +0000 UTC m=+2028.618288262" Feb 18 20:07:57 crc kubenswrapper[4932]: I0218 20:07:57.606431 4932 
patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:07:57 crc kubenswrapper[4932]: I0218 20:07:57.607039 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:08:21 crc kubenswrapper[4932]: I0218 20:08:21.339615 4932 generic.go:334] "Generic (PLEG): container finished" podID="89184ceb-9c72-46ed-ae3f-27228af58cfc" containerID="1370d89c203eaedd57a26f8365c4d4c46629e61cc2bc1b7a5ded07e0ca770571" exitCode=0 Feb 18 20:08:21 crc kubenswrapper[4932]: I0218 20:08:21.339674 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" event={"ID":"89184ceb-9c72-46ed-ae3f-27228af58cfc","Type":"ContainerDied","Data":"1370d89c203eaedd57a26f8365c4d4c46629e61cc2bc1b7a5ded07e0ca770571"} Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.814421 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939333 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-inventory\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939377 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k56x8\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-kube-api-access-k56x8\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939420 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ssh-key-openstack-edpm-ipam\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939455 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939477 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: 
\"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939501 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ovn-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939536 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-neutron-metadata-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939553 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-nova-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939584 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-telemetry-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939603 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-bootstrap-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939643 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-libvirt-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939681 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939731 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.939751 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-repo-setup-combined-ca-bundle\") pod \"89184ceb-9c72-46ed-ae3f-27228af58cfc\" (UID: \"89184ceb-9c72-46ed-ae3f-27228af58cfc\") " Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.945110 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.945472 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.946299 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.947010 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.948074 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.948827 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.949060 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.949738 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.950868 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.951832 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.952085 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.958945 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-kube-api-access-k56x8" (OuterVolumeSpecName: "kube-api-access-k56x8") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "kube-api-access-k56x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.979679 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:22 crc kubenswrapper[4932]: I0218 20:08:22.981635 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-inventory" (OuterVolumeSpecName: "inventory") pod "89184ceb-9c72-46ed-ae3f-27228af58cfc" (UID: "89184ceb-9c72-46ed-ae3f-27228af58cfc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042682 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042735 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042754 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042773 4932 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042788 4932 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042800 4932 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042815 4932 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042829 4932 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042840 4932 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042854 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042868 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc 
kubenswrapper[4932]: I0218 20:08:23.042912 4932 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042926 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89184ceb-9c72-46ed-ae3f-27228af58cfc-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.042937 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k56x8\" (UniqueName: \"kubernetes.io/projected/89184ceb-9c72-46ed-ae3f-27228af58cfc-kube-api-access-k56x8\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.360284 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" event={"ID":"89184ceb-9c72-46ed-ae3f-27228af58cfc","Type":"ContainerDied","Data":"3f629ae06fe7522c8ae80af28f92eb58a1a8477afcd7e49d5bb7cbec8efa6b05"} Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.360327 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f629ae06fe7522c8ae80af28f92eb58a1a8477afcd7e49d5bb7cbec8efa6b05" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.360382 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-szztq" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.488920 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962"] Feb 18 20:08:23 crc kubenswrapper[4932]: E0218 20:08:23.489344 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89184ceb-9c72-46ed-ae3f-27228af58cfc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.489361 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="89184ceb-9c72-46ed-ae3f-27228af58cfc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.489553 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="89184ceb-9c72-46ed-ae3f-27228af58cfc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.490223 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.494798 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.495033 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.495033 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.495222 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.496477 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.519552 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962"] Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.560402 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.560466 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: 
\"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.560503 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p667\" (UniqueName: \"kubernetes.io/projected/9c4aa436-f356-454c-b810-66e7cffe0c32-kube-api-access-8p667\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.560552 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c4aa436-f356-454c-b810-66e7cffe0c32-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.560576 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.662627 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.662668 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.662697 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p667\" (UniqueName: \"kubernetes.io/projected/9c4aa436-f356-454c-b810-66e7cffe0c32-kube-api-access-8p667\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.662734 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c4aa436-f356-454c-b810-66e7cffe0c32-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.662754 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.664297 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c4aa436-f356-454c-b810-66e7cffe0c32-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: 
\"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.667560 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.670334 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.681653 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.682539 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p667\" (UniqueName: \"kubernetes.io/projected/9c4aa436-f356-454c-b810-66e7cffe0c32-kube-api-access-8p667\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wp962\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:23 crc kubenswrapper[4932]: I0218 20:08:23.814410 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:08:24 crc kubenswrapper[4932]: I0218 20:08:24.352006 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962"] Feb 18 20:08:24 crc kubenswrapper[4932]: I0218 20:08:24.371745 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" event={"ID":"9c4aa436-f356-454c-b810-66e7cffe0c32","Type":"ContainerStarted","Data":"1caca805311938cfc09992bdd1861fae6f71210dd49992178060515fb60b5a42"} Feb 18 20:08:25 crc kubenswrapper[4932]: I0218 20:08:25.384808 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" event={"ID":"9c4aa436-f356-454c-b810-66e7cffe0c32","Type":"ContainerStarted","Data":"7bfce2eae3e52d734bba86da5ba5caa23d72727cfec425013bf472191821e900"} Feb 18 20:08:25 crc kubenswrapper[4932]: I0218 20:08:25.414835 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" podStartSLOduration=1.922954912 podStartE2EDuration="2.414814735s" podCreationTimestamp="2026-02-18 20:08:23 +0000 UTC" firstStartedPulling="2026-02-18 20:08:24.354862842 +0000 UTC m=+2067.936817687" lastFinishedPulling="2026-02-18 20:08:24.846722665 +0000 UTC m=+2068.428677510" observedRunningTime="2026-02-18 20:08:25.407273578 +0000 UTC m=+2068.989228433" watchObservedRunningTime="2026-02-18 20:08:25.414814735 +0000 UTC m=+2068.996769580" Feb 18 20:08:27 crc kubenswrapper[4932]: I0218 20:08:27.606318 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:08:27 crc kubenswrapper[4932]: I0218 20:08:27.606876 4932 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:08:27 crc kubenswrapper[4932]: I0218 20:08:27.606923 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:08:27 crc kubenswrapper[4932]: I0218 20:08:27.607649 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"93b2aadde96a1cb53f394f160a8c65ff537540cf335aacf73c90625c7fb96dd4"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:08:27 crc kubenswrapper[4932]: I0218 20:08:27.607704 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://93b2aadde96a1cb53f394f160a8c65ff537540cf335aacf73c90625c7fb96dd4" gracePeriod=600 Feb 18 20:08:28 crc kubenswrapper[4932]: I0218 20:08:28.413988 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="93b2aadde96a1cb53f394f160a8c65ff537540cf335aacf73c90625c7fb96dd4" exitCode=0 Feb 18 20:08:28 crc kubenswrapper[4932]: I0218 20:08:28.414079 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"93b2aadde96a1cb53f394f160a8c65ff537540cf335aacf73c90625c7fb96dd4"} Feb 18 20:08:28 crc kubenswrapper[4932]: I0218 
20:08:28.414812 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"} Feb 18 20:08:28 crc kubenswrapper[4932]: I0218 20:08:28.414870 4932 scope.go:117] "RemoveContainer" containerID="c6c1ef934a6fa657732f6bd53a7e75ee42f8e80b90893aa3ead20a440dde446d" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.048421 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4rc9w"] Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.052240 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.058540 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4rc9w"] Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.093194 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86lx2\" (UniqueName: \"kubernetes.io/projected/bc336435-b073-4c36-91f6-159485fd9213-kube-api-access-86lx2\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.093555 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-catalog-content\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.093609 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-utilities\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.195254 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86lx2\" (UniqueName: \"kubernetes.io/projected/bc336435-b073-4c36-91f6-159485fd9213-kube-api-access-86lx2\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.195417 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-catalog-content\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.195453 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-utilities\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.195879 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-utilities\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.196042 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-catalog-content\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.219131 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86lx2\" (UniqueName: \"kubernetes.io/projected/bc336435-b073-4c36-91f6-159485fd9213-kube-api-access-86lx2\") pod \"community-operators-4rc9w\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.402191 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:44 crc kubenswrapper[4932]: I0218 20:08:44.969056 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4rc9w"] Feb 18 20:08:44 crc kubenswrapper[4932]: W0218 20:08:44.975566 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc336435_b073_4c36_91f6_159485fd9213.slice/crio-b2082728b78a745b64c12b43c60f4a9ceb4e0b21b7b5e961de2814b0437eb84e WatchSource:0}: Error finding container b2082728b78a745b64c12b43c60f4a9ceb4e0b21b7b5e961de2814b0437eb84e: Status 404 returned error can't find the container with id b2082728b78a745b64c12b43c60f4a9ceb4e0b21b7b5e961de2814b0437eb84e Feb 18 20:08:45 crc kubenswrapper[4932]: I0218 20:08:45.607668 4932 generic.go:334] "Generic (PLEG): container finished" podID="bc336435-b073-4c36-91f6-159485fd9213" containerID="e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2" exitCode=0 Feb 18 20:08:45 crc kubenswrapper[4932]: I0218 20:08:45.607764 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rc9w" 
event={"ID":"bc336435-b073-4c36-91f6-159485fd9213","Type":"ContainerDied","Data":"e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2"} Feb 18 20:08:45 crc kubenswrapper[4932]: I0218 20:08:45.608042 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rc9w" event={"ID":"bc336435-b073-4c36-91f6-159485fd9213","Type":"ContainerStarted","Data":"b2082728b78a745b64c12b43c60f4a9ceb4e0b21b7b5e961de2814b0437eb84e"} Feb 18 20:08:47 crc kubenswrapper[4932]: I0218 20:08:47.630710 4932 generic.go:334] "Generic (PLEG): container finished" podID="bc336435-b073-4c36-91f6-159485fd9213" containerID="712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965" exitCode=0 Feb 18 20:08:47 crc kubenswrapper[4932]: I0218 20:08:47.630783 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rc9w" event={"ID":"bc336435-b073-4c36-91f6-159485fd9213","Type":"ContainerDied","Data":"712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965"} Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.034451 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2vrps"] Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.037471 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.048056 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vrps"] Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.090413 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-catalog-content\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.090763 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzdg\" (UniqueName: \"kubernetes.io/projected/30245549-b2f1-43f7-b45f-14f4ceb99f9f-kube-api-access-sxzdg\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.090984 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-utilities\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.192971 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-catalog-content\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.193037 4932 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-sxzdg\" (UniqueName: \"kubernetes.io/projected/30245549-b2f1-43f7-b45f-14f4ceb99f9f-kube-api-access-sxzdg\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.193254 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-utilities\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.193945 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-catalog-content\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.194133 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-utilities\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.224567 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxzdg\" (UniqueName: \"kubernetes.io/projected/30245549-b2f1-43f7-b45f-14f4ceb99f9f-kube-api-access-sxzdg\") pod \"redhat-operators-2vrps\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.363268 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.659490 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rc9w" event={"ID":"bc336435-b073-4c36-91f6-159485fd9213","Type":"ContainerStarted","Data":"6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6"} Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.689693 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4rc9w" podStartSLOduration=2.203465801 podStartE2EDuration="4.689674054s" podCreationTimestamp="2026-02-18 20:08:44 +0000 UTC" firstStartedPulling="2026-02-18 20:08:45.609344066 +0000 UTC m=+2089.191298911" lastFinishedPulling="2026-02-18 20:08:48.095552319 +0000 UTC m=+2091.677507164" observedRunningTime="2026-02-18 20:08:48.67821005 +0000 UTC m=+2092.260164915" watchObservedRunningTime="2026-02-18 20:08:48.689674054 +0000 UTC m=+2092.271628899" Feb 18 20:08:48 crc kubenswrapper[4932]: W0218 20:08:48.877531 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30245549_b2f1_43f7_b45f_14f4ceb99f9f.slice/crio-354e5d7372bf02f4d4f46ecddcd2781b941e1d503955208f5ff88ae68b99f044 WatchSource:0}: Error finding container 354e5d7372bf02f4d4f46ecddcd2781b941e1d503955208f5ff88ae68b99f044: Status 404 returned error can't find the container with id 354e5d7372bf02f4d4f46ecddcd2781b941e1d503955208f5ff88ae68b99f044 Feb 18 20:08:48 crc kubenswrapper[4932]: I0218 20:08:48.879910 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vrps"] Feb 18 20:08:49 crc kubenswrapper[4932]: I0218 20:08:49.668934 4932 generic.go:334] "Generic (PLEG): container finished" podID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerID="373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27" exitCode=0 
Feb 18 20:08:49 crc kubenswrapper[4932]: I0218 20:08:49.670466 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerDied","Data":"373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27"} Feb 18 20:08:49 crc kubenswrapper[4932]: I0218 20:08:49.670488 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerStarted","Data":"354e5d7372bf02f4d4f46ecddcd2781b941e1d503955208f5ff88ae68b99f044"} Feb 18 20:08:50 crc kubenswrapper[4932]: I0218 20:08:50.678469 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerStarted","Data":"8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59"} Feb 18 20:08:53 crc kubenswrapper[4932]: I0218 20:08:53.706262 4932 generic.go:334] "Generic (PLEG): container finished" podID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerID="8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59" exitCode=0 Feb 18 20:08:53 crc kubenswrapper[4932]: I0218 20:08:53.706326 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerDied","Data":"8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59"} Feb 18 20:08:54 crc kubenswrapper[4932]: I0218 20:08:54.402939 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:54 crc kubenswrapper[4932]: I0218 20:08:54.402991 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:54 crc kubenswrapper[4932]: I0218 20:08:54.446822 4932 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:54 crc kubenswrapper[4932]: I0218 20:08:54.764124 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:55 crc kubenswrapper[4932]: I0218 20:08:55.725888 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerStarted","Data":"4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970"} Feb 18 20:08:55 crc kubenswrapper[4932]: I0218 20:08:55.750725 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2vrps" podStartSLOduration=2.22888201 podStartE2EDuration="7.750703986s" podCreationTimestamp="2026-02-18 20:08:48 +0000 UTC" firstStartedPulling="2026-02-18 20:08:49.671488995 +0000 UTC m=+2093.253443840" lastFinishedPulling="2026-02-18 20:08:55.193310971 +0000 UTC m=+2098.775265816" observedRunningTime="2026-02-18 20:08:55.744580105 +0000 UTC m=+2099.326534950" watchObservedRunningTime="2026-02-18 20:08:55.750703986 +0000 UTC m=+2099.332658831" Feb 18 20:08:56 crc kubenswrapper[4932]: I0218 20:08:56.832556 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4rc9w"] Feb 18 20:08:56 crc kubenswrapper[4932]: I0218 20:08:56.833130 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4rc9w" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="registry-server" containerID="cri-o://6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6" gracePeriod=2 Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.343017 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.392365 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-utilities\") pod \"bc336435-b073-4c36-91f6-159485fd9213\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.392474 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-catalog-content\") pod \"bc336435-b073-4c36-91f6-159485fd9213\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.392675 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86lx2\" (UniqueName: \"kubernetes.io/projected/bc336435-b073-4c36-91f6-159485fd9213-kube-api-access-86lx2\") pod \"bc336435-b073-4c36-91f6-159485fd9213\" (UID: \"bc336435-b073-4c36-91f6-159485fd9213\") " Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.393355 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-utilities" (OuterVolumeSpecName: "utilities") pod "bc336435-b073-4c36-91f6-159485fd9213" (UID: "bc336435-b073-4c36-91f6-159485fd9213"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.398475 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc336435-b073-4c36-91f6-159485fd9213-kube-api-access-86lx2" (OuterVolumeSpecName: "kube-api-access-86lx2") pod "bc336435-b073-4c36-91f6-159485fd9213" (UID: "bc336435-b073-4c36-91f6-159485fd9213"). InnerVolumeSpecName "kube-api-access-86lx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.451876 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc336435-b073-4c36-91f6-159485fd9213" (UID: "bc336435-b073-4c36-91f6-159485fd9213"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.494843 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86lx2\" (UniqueName: \"kubernetes.io/projected/bc336435-b073-4c36-91f6-159485fd9213-kube-api-access-86lx2\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.495196 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.495212 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc336435-b073-4c36-91f6-159485fd9213-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.752230 4932 generic.go:334] "Generic (PLEG): container finished" podID="bc336435-b073-4c36-91f6-159485fd9213" containerID="6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6" exitCode=0 Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.752292 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4rc9w" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.752299 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rc9w" event={"ID":"bc336435-b073-4c36-91f6-159485fd9213","Type":"ContainerDied","Data":"6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6"} Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.752459 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rc9w" event={"ID":"bc336435-b073-4c36-91f6-159485fd9213","Type":"ContainerDied","Data":"b2082728b78a745b64c12b43c60f4a9ceb4e0b21b7b5e961de2814b0437eb84e"} Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.752504 4932 scope.go:117] "RemoveContainer" containerID="6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.782813 4932 scope.go:117] "RemoveContainer" containerID="712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.795110 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4rc9w"] Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.804160 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4rc9w"] Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.822298 4932 scope.go:117] "RemoveContainer" containerID="e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.872203 4932 scope.go:117] "RemoveContainer" containerID="6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6" Feb 18 20:08:57 crc kubenswrapper[4932]: E0218 20:08:57.872743 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6\": container with ID starting with 6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6 not found: ID does not exist" containerID="6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.872774 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6"} err="failed to get container status \"6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6\": rpc error: code = NotFound desc = could not find container \"6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6\": container with ID starting with 6c4f1e7259049634f1dfb3427181a8de4407df8fd73d14b02942345788f6a4d6 not found: ID does not exist" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.872797 4932 scope.go:117] "RemoveContainer" containerID="712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965" Feb 18 20:08:57 crc kubenswrapper[4932]: E0218 20:08:57.873126 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965\": container with ID starting with 712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965 not found: ID does not exist" containerID="712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.873150 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965"} err="failed to get container status \"712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965\": rpc error: code = NotFound desc = could not find container \"712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965\": container with ID 
starting with 712696118b476741f666be678ff354cfea9a5a8c35b8b0819498d9fa6ce88965 not found: ID does not exist" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.873164 4932 scope.go:117] "RemoveContainer" containerID="e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2" Feb 18 20:08:57 crc kubenswrapper[4932]: E0218 20:08:57.873447 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2\": container with ID starting with e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2 not found: ID does not exist" containerID="e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2" Feb 18 20:08:57 crc kubenswrapper[4932]: I0218 20:08:57.873478 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2"} err="failed to get container status \"e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2\": rpc error: code = NotFound desc = could not find container \"e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2\": container with ID starting with e99c9e5505be48687e8d1ca9827e00f5b76f46976caf77f82360579460ae28b2 not found: ID does not exist" Feb 18 20:08:58 crc kubenswrapper[4932]: I0218 20:08:58.363591 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:58 crc kubenswrapper[4932]: I0218 20:08:58.363639 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:08:59 crc kubenswrapper[4932]: I0218 20:08:59.193070 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc336435-b073-4c36-91f6-159485fd9213" path="/var/lib/kubelet/pods/bc336435-b073-4c36-91f6-159485fd9213/volumes" Feb 18 20:08:59 crc 
kubenswrapper[4932]: I0218 20:08:59.426984 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2vrps" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="registry-server" probeResult="failure" output=< Feb 18 20:08:59 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 20:08:59 crc kubenswrapper[4932]: > Feb 18 20:09:08 crc kubenswrapper[4932]: I0218 20:09:08.417908 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:09:08 crc kubenswrapper[4932]: I0218 20:09:08.484544 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:09:08 crc kubenswrapper[4932]: I0218 20:09:08.661587 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vrps"] Feb 18 20:09:09 crc kubenswrapper[4932]: I0218 20:09:09.870187 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2vrps" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="registry-server" containerID="cri-o://4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970" gracePeriod=2 Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.388141 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.460710 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-utilities\") pod \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.460909 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-catalog-content\") pod \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.461014 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzdg\" (UniqueName: \"kubernetes.io/projected/30245549-b2f1-43f7-b45f-14f4ceb99f9f-kube-api-access-sxzdg\") pod \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\" (UID: \"30245549-b2f1-43f7-b45f-14f4ceb99f9f\") " Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.462311 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-utilities" (OuterVolumeSpecName: "utilities") pod "30245549-b2f1-43f7-b45f-14f4ceb99f9f" (UID: "30245549-b2f1-43f7-b45f-14f4ceb99f9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.471604 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30245549-b2f1-43f7-b45f-14f4ceb99f9f-kube-api-access-sxzdg" (OuterVolumeSpecName: "kube-api-access-sxzdg") pod "30245549-b2f1-43f7-b45f-14f4ceb99f9f" (UID: "30245549-b2f1-43f7-b45f-14f4ceb99f9f"). InnerVolumeSpecName "kube-api-access-sxzdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.564329 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.564723 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxzdg\" (UniqueName: \"kubernetes.io/projected/30245549-b2f1-43f7-b45f-14f4ceb99f9f-kube-api-access-sxzdg\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.608910 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30245549-b2f1-43f7-b45f-14f4ceb99f9f" (UID: "30245549-b2f1-43f7-b45f-14f4ceb99f9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.665679 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30245549-b2f1-43f7-b45f-14f4ceb99f9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.879351 4932 generic.go:334] "Generic (PLEG): container finished" podID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerID="4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970" exitCode=0 Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.879394 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerDied","Data":"4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970"} Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.879420 4932 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-2vrps" event={"ID":"30245549-b2f1-43f7-b45f-14f4ceb99f9f","Type":"ContainerDied","Data":"354e5d7372bf02f4d4f46ecddcd2781b941e1d503955208f5ff88ae68b99f044"} Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.879434 4932 scope.go:117] "RemoveContainer" containerID="4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.879550 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vrps" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.922551 4932 scope.go:117] "RemoveContainer" containerID="8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.925782 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vrps"] Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.936783 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2vrps"] Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.944701 4932 scope.go:117] "RemoveContainer" containerID="373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.997837 4932 scope.go:117] "RemoveContainer" containerID="4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970" Feb 18 20:09:10 crc kubenswrapper[4932]: E0218 20:09:10.998423 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970\": container with ID starting with 4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970 not found: ID does not exist" containerID="4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.998497 4932 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970"} err="failed to get container status \"4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970\": rpc error: code = NotFound desc = could not find container \"4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970\": container with ID starting with 4ba0256f4ebc76d751d0d6af52c519b609a5c4e120f804ba19b650e366561970 not found: ID does not exist" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.998539 4932 scope.go:117] "RemoveContainer" containerID="8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59" Feb 18 20:09:10 crc kubenswrapper[4932]: E0218 20:09:10.998968 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59\": container with ID starting with 8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59 not found: ID does not exist" containerID="8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.999023 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59"} err="failed to get container status \"8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59\": rpc error: code = NotFound desc = could not find container \"8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59\": container with ID starting with 8654c7c9614d1756c519ed99b4ecc9ce014d8049c23dfc4e8332b059a5e7cd59 not found: ID does not exist" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.999061 4932 scope.go:117] "RemoveContainer" containerID="373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27" Feb 18 20:09:10 crc kubenswrapper[4932]: E0218 
20:09:10.999408 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27\": container with ID starting with 373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27 not found: ID does not exist" containerID="373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27" Feb 18 20:09:10 crc kubenswrapper[4932]: I0218 20:09:10.999450 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27"} err="failed to get container status \"373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27\": rpc error: code = NotFound desc = could not find container \"373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27\": container with ID starting with 373a51a55fb1b77e863638aedbe096088d426dbdf3dc2be6c3d9df0c80351c27 not found: ID does not exist" Feb 18 20:09:11 crc kubenswrapper[4932]: I0218 20:09:11.190221 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" path="/var/lib/kubelet/pods/30245549-b2f1-43f7-b45f-14f4ceb99f9f/volumes" Feb 18 20:09:29 crc kubenswrapper[4932]: I0218 20:09:29.049668 4932 generic.go:334] "Generic (PLEG): container finished" podID="9c4aa436-f356-454c-b810-66e7cffe0c32" containerID="7bfce2eae3e52d734bba86da5ba5caa23d72727cfec425013bf472191821e900" exitCode=0 Feb 18 20:09:29 crc kubenswrapper[4932]: I0218 20:09:29.049785 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" event={"ID":"9c4aa436-f356-454c-b810-66e7cffe0c32","Type":"ContainerDied","Data":"7bfce2eae3e52d734bba86da5ba5caa23d72727cfec425013bf472191821e900"} Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.532778 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.691920 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p667\" (UniqueName: \"kubernetes.io/projected/9c4aa436-f356-454c-b810-66e7cffe0c32-kube-api-access-8p667\") pod \"9c4aa436-f356-454c-b810-66e7cffe0c32\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.692062 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ovn-combined-ca-bundle\") pod \"9c4aa436-f356-454c-b810-66e7cffe0c32\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.692088 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c4aa436-f356-454c-b810-66e7cffe0c32-ovncontroller-config-0\") pod \"9c4aa436-f356-454c-b810-66e7cffe0c32\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.692261 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ssh-key-openstack-edpm-ipam\") pod \"9c4aa436-f356-454c-b810-66e7cffe0c32\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.692294 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-inventory\") pod \"9c4aa436-f356-454c-b810-66e7cffe0c32\" (UID: \"9c4aa436-f356-454c-b810-66e7cffe0c32\") " Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.697285 4932 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9c4aa436-f356-454c-b810-66e7cffe0c32" (UID: "9c4aa436-f356-454c-b810-66e7cffe0c32"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.698441 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c4aa436-f356-454c-b810-66e7cffe0c32-kube-api-access-8p667" (OuterVolumeSpecName: "kube-api-access-8p667") pod "9c4aa436-f356-454c-b810-66e7cffe0c32" (UID: "9c4aa436-f356-454c-b810-66e7cffe0c32"). InnerVolumeSpecName "kube-api-access-8p667". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.722759 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4aa436-f356-454c-b810-66e7cffe0c32-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "9c4aa436-f356-454c-b810-66e7cffe0c32" (UID: "9c4aa436-f356-454c-b810-66e7cffe0c32"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.725863 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-inventory" (OuterVolumeSpecName: "inventory") pod "9c4aa436-f356-454c-b810-66e7cffe0c32" (UID: "9c4aa436-f356-454c-b810-66e7cffe0c32"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.726309 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9c4aa436-f356-454c-b810-66e7cffe0c32" (UID: "9c4aa436-f356-454c-b810-66e7cffe0c32"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.795282 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.795339 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.795359 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p667\" (UniqueName: \"kubernetes.io/projected/9c4aa436-f356-454c-b810-66e7cffe0c32-kube-api-access-8p667\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.795377 4932 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c4aa436-f356-454c-b810-66e7cffe0c32-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:30 crc kubenswrapper[4932]: I0218 20:09:30.795395 4932 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9c4aa436-f356-454c-b810-66e7cffe0c32-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.076032 4932 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" event={"ID":"9c4aa436-f356-454c-b810-66e7cffe0c32","Type":"ContainerDied","Data":"1caca805311938cfc09992bdd1861fae6f71210dd49992178060515fb60b5a42"} Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.076071 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1caca805311938cfc09992bdd1861fae6f71210dd49992178060515fb60b5a42" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.076092 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wp962" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.175945 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj"] Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176728 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4aa436-f356-454c-b810-66e7cffe0c32" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176751 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4aa436-f356-454c-b810-66e7cffe0c32" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176770 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="registry-server" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176778 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="registry-server" Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176790 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="extract-content" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176797 4932 
state_mem.go:107] "Deleted CPUSet assignment" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="extract-content" Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176820 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="extract-content" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176827 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="extract-content" Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176846 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="extract-utilities" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176854 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="extract-utilities" Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176874 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="registry-server" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176882 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="registry-server" Feb 18 20:09:31 crc kubenswrapper[4932]: E0218 20:09:31.176896 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="extract-utilities" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.176903 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="extract-utilities" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.177116 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="30245549-b2f1-43f7-b45f-14f4ceb99f9f" containerName="registry-server" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.177135 4932 
memory_manager.go:354] "RemoveStaleState removing state" podUID="bc336435-b073-4c36-91f6-159485fd9213" containerName="registry-server" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.177150 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c4aa436-f356-454c-b810-66e7cffe0c32" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.177900 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.180022 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.180083 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.180208 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.181943 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.181950 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.182102 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.194937 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj"] Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.308240 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.308291 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.308485 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.308645 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d46t8\" (UniqueName: \"kubernetes.io/projected/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-kube-api-access-d46t8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.308917 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.308994 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.410387 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.410457 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.410528 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.410568 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.410656 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.410703 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d46t8\" (UniqueName: \"kubernetes.io/projected/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-kube-api-access-d46t8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.414731 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: 
\"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.415143 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.416519 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.416972 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.425515 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.429025 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d46t8\" (UniqueName: \"kubernetes.io/projected/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-kube-api-access-d46t8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:31 crc kubenswrapper[4932]: I0218 20:09:31.503777 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:09:32 crc kubenswrapper[4932]: I0218 20:09:32.149528 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj"] Feb 18 20:09:33 crc kubenswrapper[4932]: I0218 20:09:33.100914 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" event={"ID":"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c","Type":"ContainerStarted","Data":"755fcb39e1b91cd8b987a4bfa68473f8016f747f8c5758a79f4905e1b2df2117"} Feb 18 20:09:33 crc kubenswrapper[4932]: I0218 20:09:33.101185 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" event={"ID":"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c","Type":"ContainerStarted","Data":"286150e5518249c2629c065fb8f36ccd7d32311c5a5490e9bc7e7cbcc0367a3d"} Feb 18 20:09:33 crc kubenswrapper[4932]: I0218 20:09:33.124097 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" podStartSLOduration=1.547606913 podStartE2EDuration="2.124075829s" podCreationTimestamp="2026-02-18 20:09:31 +0000 UTC" firstStartedPulling="2026-02-18 
20:09:32.167925114 +0000 UTC m=+2135.749879959" lastFinishedPulling="2026-02-18 20:09:32.74439403 +0000 UTC m=+2136.326348875" observedRunningTime="2026-02-18 20:09:33.123588327 +0000 UTC m=+2136.705543172" watchObservedRunningTime="2026-02-18 20:09:33.124075829 +0000 UTC m=+2136.706030674" Feb 18 20:09:56 crc kubenswrapper[4932]: I0218 20:09:56.968001 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hwcb8"] Feb 18 20:09:56 crc kubenswrapper[4932]: I0218 20:09:56.970591 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:56 crc kubenswrapper[4932]: I0218 20:09:56.983807 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwcb8"] Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.170130 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-utilities\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.170356 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slw5j\" (UniqueName: \"kubernetes.io/projected/d46440a6-e998-48df-a6ee-83e196dc6f97-kube-api-access-slw5j\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.170489 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-catalog-content\") pod \"redhat-marketplace-hwcb8\" (UID: 
\"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.272316 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-utilities\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.272427 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slw5j\" (UniqueName: \"kubernetes.io/projected/d46440a6-e998-48df-a6ee-83e196dc6f97-kube-api-access-slw5j\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.272491 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-catalog-content\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.273026 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-utilities\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.273070 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-catalog-content\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " 
pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.304203 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slw5j\" (UniqueName: \"kubernetes.io/projected/d46440a6-e998-48df-a6ee-83e196dc6f97-kube-api-access-slw5j\") pod \"redhat-marketplace-hwcb8\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:57 crc kubenswrapper[4932]: I0218 20:09:57.590717 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:09:58 crc kubenswrapper[4932]: I0218 20:09:58.094631 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwcb8"] Feb 18 20:09:58 crc kubenswrapper[4932]: I0218 20:09:58.357060 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerStarted","Data":"5e3ce083f25ec7c111ba25d38562a008bf0ce7b8f840b83df011400e166aad5a"} Feb 18 20:09:59 crc kubenswrapper[4932]: I0218 20:09:59.369959 4932 generic.go:334] "Generic (PLEG): container finished" podID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerID="ceae045b7c2c872fa7d9d4bedd72b0329dffb4ff15e8c477344b3b13adecbd9f" exitCode=0 Feb 18 20:09:59 crc kubenswrapper[4932]: I0218 20:09:59.370015 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerDied","Data":"ceae045b7c2c872fa7d9d4bedd72b0329dffb4ff15e8c477344b3b13adecbd9f"} Feb 18 20:10:00 crc kubenswrapper[4932]: I0218 20:10:00.384582 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" 
event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerStarted","Data":"dc22e969de30f320df29d8fb3300ead1ccce40e8fd0a68c631edab60b4aae345"} Feb 18 20:10:01 crc kubenswrapper[4932]: I0218 20:10:01.398882 4932 generic.go:334] "Generic (PLEG): container finished" podID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerID="dc22e969de30f320df29d8fb3300ead1ccce40e8fd0a68c631edab60b4aae345" exitCode=0 Feb 18 20:10:01 crc kubenswrapper[4932]: I0218 20:10:01.398925 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerDied","Data":"dc22e969de30f320df29d8fb3300ead1ccce40e8fd0a68c631edab60b4aae345"} Feb 18 20:10:02 crc kubenswrapper[4932]: I0218 20:10:02.415538 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerStarted","Data":"d5e3d44b8c687a4ed0339e8d165e82f6353dbc670604a3f79e4367feb38ddc1a"} Feb 18 20:10:02 crc kubenswrapper[4932]: I0218 20:10:02.444106 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hwcb8" podStartSLOduration=3.966374052 podStartE2EDuration="6.444080615s" podCreationTimestamp="2026-02-18 20:09:56 +0000 UTC" firstStartedPulling="2026-02-18 20:09:59.374132765 +0000 UTC m=+2162.956087610" lastFinishedPulling="2026-02-18 20:10:01.851839328 +0000 UTC m=+2165.433794173" observedRunningTime="2026-02-18 20:10:02.432874577 +0000 UTC m=+2166.014829442" watchObservedRunningTime="2026-02-18 20:10:02.444080615 +0000 UTC m=+2166.026035470" Feb 18 20:10:07 crc kubenswrapper[4932]: I0218 20:10:07.591475 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:10:07 crc kubenswrapper[4932]: I0218 20:10:07.592651 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:10:07 crc kubenswrapper[4932]: I0218 20:10:07.658989 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:10:08 crc kubenswrapper[4932]: I0218 20:10:08.525057 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:10:08 crc kubenswrapper[4932]: I0218 20:10:08.575849 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwcb8"] Feb 18 20:10:10 crc kubenswrapper[4932]: I0218 20:10:10.490566 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hwcb8" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="registry-server" containerID="cri-o://d5e3d44b8c687a4ed0339e8d165e82f6353dbc670604a3f79e4367feb38ddc1a" gracePeriod=2 Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.504870 4932 generic.go:334] "Generic (PLEG): container finished" podID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerID="d5e3d44b8c687a4ed0339e8d165e82f6353dbc670604a3f79e4367feb38ddc1a" exitCode=0 Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.504970 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerDied","Data":"d5e3d44b8c687a4ed0339e8d165e82f6353dbc670604a3f79e4367feb38ddc1a"} Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.505392 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwcb8" event={"ID":"d46440a6-e998-48df-a6ee-83e196dc6f97","Type":"ContainerDied","Data":"5e3ce083f25ec7c111ba25d38562a008bf0ce7b8f840b83df011400e166aad5a"} Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.505416 4932 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="5e3ce083f25ec7c111ba25d38562a008bf0ce7b8f840b83df011400e166aad5a" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.508208 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.698906 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slw5j\" (UniqueName: \"kubernetes.io/projected/d46440a6-e998-48df-a6ee-83e196dc6f97-kube-api-access-slw5j\") pod \"d46440a6-e998-48df-a6ee-83e196dc6f97\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.699040 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-utilities\") pod \"d46440a6-e998-48df-a6ee-83e196dc6f97\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.699494 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-catalog-content\") pod \"d46440a6-e998-48df-a6ee-83e196dc6f97\" (UID: \"d46440a6-e998-48df-a6ee-83e196dc6f97\") " Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.700054 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-utilities" (OuterVolumeSpecName: "utilities") pod "d46440a6-e998-48df-a6ee-83e196dc6f97" (UID: "d46440a6-e998-48df-a6ee-83e196dc6f97"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.701955 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.705521 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d46440a6-e998-48df-a6ee-83e196dc6f97-kube-api-access-slw5j" (OuterVolumeSpecName: "kube-api-access-slw5j") pod "d46440a6-e998-48df-a6ee-83e196dc6f97" (UID: "d46440a6-e998-48df-a6ee-83e196dc6f97"). InnerVolumeSpecName "kube-api-access-slw5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.726437 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d46440a6-e998-48df-a6ee-83e196dc6f97" (UID: "d46440a6-e998-48df-a6ee-83e196dc6f97"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.803743 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d46440a6-e998-48df-a6ee-83e196dc6f97-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:11 crc kubenswrapper[4932]: I0218 20:10:11.803796 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slw5j\" (UniqueName: \"kubernetes.io/projected/d46440a6-e998-48df-a6ee-83e196dc6f97-kube-api-access-slw5j\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:12 crc kubenswrapper[4932]: I0218 20:10:12.527126 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwcb8" Feb 18 20:10:12 crc kubenswrapper[4932]: I0218 20:10:12.574429 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwcb8"] Feb 18 20:10:12 crc kubenswrapper[4932]: I0218 20:10:12.583916 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwcb8"] Feb 18 20:10:13 crc kubenswrapper[4932]: I0218 20:10:13.200954 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" path="/var/lib/kubelet/pods/d46440a6-e998-48df-a6ee-83e196dc6f97/volumes" Feb 18 20:10:20 crc kubenswrapper[4932]: I0218 20:10:20.617394 4932 generic.go:334] "Generic (PLEG): container finished" podID="c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" containerID="755fcb39e1b91cd8b987a4bfa68473f8016f747f8c5758a79f4905e1b2df2117" exitCode=0 Feb 18 20:10:20 crc kubenswrapper[4932]: I0218 20:10:20.617532 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" event={"ID":"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c","Type":"ContainerDied","Data":"755fcb39e1b91cd8b987a4bfa68473f8016f747f8c5758a79f4905e1b2df2117"} Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.172708 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.323425 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-nova-metadata-neutron-config-0\") pod \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.323715 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.323864 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d46t8\" (UniqueName: \"kubernetes.io/projected/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-kube-api-access-d46t8\") pod \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.324097 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-inventory\") pod \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.324265 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-ssh-key-openstack-edpm-ipam\") pod \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " Feb 18 20:10:22 crc 
kubenswrapper[4932]: I0218 20:10:22.324454 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-metadata-combined-ca-bundle\") pod \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\" (UID: \"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c\") " Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.329770 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-kube-api-access-d46t8" (OuterVolumeSpecName: "kube-api-access-d46t8") pod "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" (UID: "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c"). InnerVolumeSpecName "kube-api-access-d46t8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.336447 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" (UID: "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.353644 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-inventory" (OuterVolumeSpecName: "inventory") pod "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" (UID: "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.369140 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" (UID: "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.373082 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" (UID: "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.378634 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" (UID: "c71e78bd-5a3a-437b-8ca4-4fbebf52d75c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.428675 4932 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.428753 4932 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.428789 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d46t8\" (UniqueName: \"kubernetes.io/projected/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-kube-api-access-d46t8\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.428817 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.428841 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.428867 4932 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71e78bd-5a3a-437b-8ca4-4fbebf52d75c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.640805 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.640791 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tpdjj" event={"ID":"c71e78bd-5a3a-437b-8ca4-4fbebf52d75c","Type":"ContainerDied","Data":"286150e5518249c2629c065fb8f36ccd7d32311c5a5490e9bc7e7cbcc0367a3d"} Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.640886 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="286150e5518249c2629c065fb8f36ccd7d32311c5a5490e9bc7e7cbcc0367a3d" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.912672 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj"] Feb 18 20:10:22 crc kubenswrapper[4932]: E0218 20:10:22.913143 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.913168 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 20:10:22 crc kubenswrapper[4932]: E0218 20:10:22.913205 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="extract-utilities" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.913213 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="extract-utilities" Feb 18 20:10:22 crc kubenswrapper[4932]: E0218 20:10:22.913246 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="extract-content" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.913255 4932 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="extract-content" Feb 18 20:10:22 crc kubenswrapper[4932]: E0218 20:10:22.913268 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="registry-server" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.913276 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="registry-server" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.913487 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="c71e78bd-5a3a-437b-8ca4-4fbebf52d75c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.913526 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="d46440a6-e998-48df-a6ee-83e196dc6f97" containerName="registry-server" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.914399 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.916615 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.916905 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.916915 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.919623 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.924099 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj"] Feb 18 20:10:22 crc kubenswrapper[4932]: I0218 20:10:22.955921 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.056641 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.056807 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: 
\"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.056866 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.056919 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf6nh\" (UniqueName: \"kubernetes.io/projected/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-kube-api-access-gf6nh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.057292 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.159701 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.160110 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.160139 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.160205 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf6nh\" (UniqueName: \"kubernetes.io/projected/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-kube-api-access-gf6nh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.160315 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.170757 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-ssh-key-openstack-edpm-ipam\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.171029 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.171731 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.174579 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.180591 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf6nh\" (UniqueName: \"kubernetes.io/projected/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-kube-api-access-gf6nh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.264762 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:10:23 crc kubenswrapper[4932]: W0218 20:10:23.784415 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc47c3fb_c74e_42df_ba84_e4c58dbbe796.slice/crio-47361800b657b675d446710f1d30d0a24640937a20117a73ecbb476cd259604d WatchSource:0}: Error finding container 47361800b657b675d446710f1d30d0a24640937a20117a73ecbb476cd259604d: Status 404 returned error can't find the container with id 47361800b657b675d446710f1d30d0a24640937a20117a73ecbb476cd259604d Feb 18 20:10:23 crc kubenswrapper[4932]: I0218 20:10:23.786477 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj"] Feb 18 20:10:24 crc kubenswrapper[4932]: I0218 20:10:24.661166 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" event={"ID":"bc47c3fb-c74e-42df-ba84-e4c58dbbe796","Type":"ContainerStarted","Data":"bf95cd775c67b15f2eb4cba258d588c2723c45e736242db964a5a9abd7443fcb"} Feb 18 20:10:24 crc kubenswrapper[4932]: I0218 20:10:24.661540 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" event={"ID":"bc47c3fb-c74e-42df-ba84-e4c58dbbe796","Type":"ContainerStarted","Data":"47361800b657b675d446710f1d30d0a24640937a20117a73ecbb476cd259604d"} Feb 18 20:10:24 crc kubenswrapper[4932]: I0218 20:10:24.681078 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" podStartSLOduration=2.180694446 podStartE2EDuration="2.681059177s" podCreationTimestamp="2026-02-18 20:10:22 +0000 UTC" firstStartedPulling="2026-02-18 20:10:23.787217645 +0000 UTC m=+2187.369172490" lastFinishedPulling="2026-02-18 20:10:24.287582376 +0000 UTC m=+2187.869537221" 
observedRunningTime="2026-02-18 20:10:24.677278143 +0000 UTC m=+2188.259232988" watchObservedRunningTime="2026-02-18 20:10:24.681059177 +0000 UTC m=+2188.263014022" Feb 18 20:10:27 crc kubenswrapper[4932]: I0218 20:10:27.606130 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:10:27 crc kubenswrapper[4932]: I0218 20:10:27.606764 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:10:57 crc kubenswrapper[4932]: I0218 20:10:57.606310 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:10:57 crc kubenswrapper[4932]: I0218 20:10:57.607016 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:11:27 crc kubenswrapper[4932]: I0218 20:11:27.606102 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 18 20:11:27 crc kubenswrapper[4932]: I0218 20:11:27.606602 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:11:27 crc kubenswrapper[4932]: I0218 20:11:27.606648 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:11:27 crc kubenswrapper[4932]: I0218 20:11:27.607186 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:11:27 crc kubenswrapper[4932]: I0218 20:11:27.607294 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" gracePeriod=600 Feb 18 20:11:27 crc kubenswrapper[4932]: E0218 20:11:27.731868 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:11:28 crc kubenswrapper[4932]: I0218 20:11:28.347565 
4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" exitCode=0 Feb 18 20:11:28 crc kubenswrapper[4932]: I0218 20:11:28.347631 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f"} Feb 18 20:11:28 crc kubenswrapper[4932]: I0218 20:11:28.347687 4932 scope.go:117] "RemoveContainer" containerID="93b2aadde96a1cb53f394f160a8c65ff537540cf335aacf73c90625c7fb96dd4" Feb 18 20:11:28 crc kubenswrapper[4932]: I0218 20:11:28.348828 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:11:28 crc kubenswrapper[4932]: E0218 20:11:28.349427 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:11:43 crc kubenswrapper[4932]: I0218 20:11:43.180791 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:11:43 crc kubenswrapper[4932]: E0218 20:11:43.182252 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:11:57 crc kubenswrapper[4932]: I0218 20:11:57.180507 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:11:57 crc kubenswrapper[4932]: E0218 20:11:57.181642 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:12:12 crc kubenswrapper[4932]: I0218 20:12:12.179782 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:12:12 crc kubenswrapper[4932]: E0218 20:12:12.180763 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:12:27 crc kubenswrapper[4932]: I0218 20:12:27.188695 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:12:27 crc kubenswrapper[4932]: E0218 20:12:27.189443 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:12:38 crc kubenswrapper[4932]: I0218 20:12:38.179547 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:12:38 crc kubenswrapper[4932]: E0218 20:12:38.180441 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:12:51 crc kubenswrapper[4932]: I0218 20:12:51.180515 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:12:51 crc kubenswrapper[4932]: E0218 20:12:51.181255 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:13:05 crc kubenswrapper[4932]: I0218 20:13:05.179844 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:13:05 crc kubenswrapper[4932]: E0218 20:13:05.180961 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:13:18 crc kubenswrapper[4932]: I0218 20:13:18.180495 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:13:18 crc kubenswrapper[4932]: E0218 20:13:18.181843 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:13:29 crc kubenswrapper[4932]: I0218 20:13:29.179705 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:13:29 crc kubenswrapper[4932]: E0218 20:13:29.180572 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:13:44 crc kubenswrapper[4932]: I0218 20:13:44.179815 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:13:44 crc kubenswrapper[4932]: E0218 20:13:44.180829 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:13:58 crc kubenswrapper[4932]: I0218 20:13:58.180977 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:13:58 crc kubenswrapper[4932]: E0218 20:13:58.183368 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:14:09 crc kubenswrapper[4932]: I0218 20:14:09.179973 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:14:09 crc kubenswrapper[4932]: E0218 20:14:09.180704 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:14:18 crc kubenswrapper[4932]: I0218 20:14:18.136963 4932 generic.go:334] "Generic (PLEG): container finished" podID="bc47c3fb-c74e-42df-ba84-e4c58dbbe796" containerID="bf95cd775c67b15f2eb4cba258d588c2723c45e736242db964a5a9abd7443fcb" exitCode=0 Feb 18 20:14:18 crc kubenswrapper[4932]: I0218 20:14:18.137124 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" event={"ID":"bc47c3fb-c74e-42df-ba84-e4c58dbbe796","Type":"ContainerDied","Data":"bf95cd775c67b15f2eb4cba258d588c2723c45e736242db964a5a9abd7443fcb"} Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.574444 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.675477 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf6nh\" (UniqueName: \"kubernetes.io/projected/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-kube-api-access-gf6nh\") pod \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.675555 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-secret-0\") pod \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.675671 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-ssh-key-openstack-edpm-ipam\") pod \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.675744 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-combined-ca-bundle\") pod \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.675794 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-inventory\") pod \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\" (UID: \"bc47c3fb-c74e-42df-ba84-e4c58dbbe796\") " Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.683560 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "bc47c3fb-c74e-42df-ba84-e4c58dbbe796" (UID: "bc47c3fb-c74e-42df-ba84-e4c58dbbe796"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.683617 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-kube-api-access-gf6nh" (OuterVolumeSpecName: "kube-api-access-gf6nh") pod "bc47c3fb-c74e-42df-ba84-e4c58dbbe796" (UID: "bc47c3fb-c74e-42df-ba84-e4c58dbbe796"). InnerVolumeSpecName "kube-api-access-gf6nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.707966 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bc47c3fb-c74e-42df-ba84-e4c58dbbe796" (UID: "bc47c3fb-c74e-42df-ba84-e4c58dbbe796"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.720141 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "bc47c3fb-c74e-42df-ba84-e4c58dbbe796" (UID: "bc47c3fb-c74e-42df-ba84-e4c58dbbe796"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.733416 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-inventory" (OuterVolumeSpecName: "inventory") pod "bc47c3fb-c74e-42df-ba84-e4c58dbbe796" (UID: "bc47c3fb-c74e-42df-ba84-e4c58dbbe796"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.778567 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.778655 4932 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.778680 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.778700 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf6nh\" (UniqueName: \"kubernetes.io/projected/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-kube-api-access-gf6nh\") 
on node \"crc\" DevicePath \"\"" Feb 18 20:14:19 crc kubenswrapper[4932]: I0218 20:14:19.778720 4932 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bc47c3fb-c74e-42df-ba84-e4c58dbbe796-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.162746 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" event={"ID":"bc47c3fb-c74e-42df-ba84-e4c58dbbe796","Type":"ContainerDied","Data":"47361800b657b675d446710f1d30d0a24640937a20117a73ecbb476cd259604d"} Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.162813 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47361800b657b675d446710f1d30d0a24640937a20117a73ecbb476cd259604d" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.163413 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-9kcnj" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.296619 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk"] Feb 18 20:14:20 crc kubenswrapper[4932]: E0218 20:14:20.297265 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc47c3fb-c74e-42df-ba84-e4c58dbbe796" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.297360 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc47c3fb-c74e-42df-ba84-e4c58dbbe796" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.297607 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc47c3fb-c74e-42df-ba84-e4c58dbbe796" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.298396 4932 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.301227 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.301313 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.303220 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.305092 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.305111 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.305591 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.306256 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.319274 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk"] Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.391712 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" 
Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392045 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vzz\" (UniqueName: \"kubernetes.io/projected/8af71c97-85dc-46f5-9fe0-7e4827f3e981-kube-api-access-g9vzz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392228 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392315 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392451 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392592 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" 
(UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392681 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392762 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.392839 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.494408 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" 
(UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.495075 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.495276 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.495463 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.495687 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.495866 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g9vzz\" (UniqueName: \"kubernetes.io/projected/8af71c97-85dc-46f5-9fe0-7e4827f3e981-kube-api-access-g9vzz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.496063 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.496271 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.496576 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.498011 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.500503 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.500575 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.501610 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.502186 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.502199 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.502243 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.502463 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.522779 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vzz\" (UniqueName: \"kubernetes.io/projected/8af71c97-85dc-46f5-9fe0-7e4827f3e981-kube-api-access-g9vzz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6d4tk\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:20 crc kubenswrapper[4932]: I0218 20:14:20.617578 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:14:21 crc kubenswrapper[4932]: I0218 20:14:21.199628 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk"] Feb 18 20:14:21 crc kubenswrapper[4932]: I0218 20:14:21.203736 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:14:22 crc kubenswrapper[4932]: I0218 20:14:22.178802 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:14:22 crc kubenswrapper[4932]: E0218 20:14:22.179290 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:14:22 crc kubenswrapper[4932]: I0218 20:14:22.182093 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" event={"ID":"8af71c97-85dc-46f5-9fe0-7e4827f3e981","Type":"ContainerStarted","Data":"92107d2c8f069f6435154109fac7c79f113f6ea4864ec704bdf5eec52f4f21b0"} Feb 18 20:14:22 crc kubenswrapper[4932]: I0218 20:14:22.182125 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" event={"ID":"8af71c97-85dc-46f5-9fe0-7e4827f3e981","Type":"ContainerStarted","Data":"95205dea94bf7eafa594ecbadc936061cd8dadbc7f5242ac8efb85f35357d2f5"} Feb 18 20:14:22 crc kubenswrapper[4932]: I0218 20:14:22.206142 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" 
podStartSLOduration=1.7491718939999998 podStartE2EDuration="2.206117108s" podCreationTimestamp="2026-02-18 20:14:20 +0000 UTC" firstStartedPulling="2026-02-18 20:14:21.203534221 +0000 UTC m=+2424.785489066" lastFinishedPulling="2026-02-18 20:14:21.660479425 +0000 UTC m=+2425.242434280" observedRunningTime="2026-02-18 20:14:22.200140339 +0000 UTC m=+2425.782095194" watchObservedRunningTime="2026-02-18 20:14:22.206117108 +0000 UTC m=+2425.788071973" Feb 18 20:14:34 crc kubenswrapper[4932]: I0218 20:14:34.179825 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:14:34 crc kubenswrapper[4932]: E0218 20:14:34.180586 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:14:45 crc kubenswrapper[4932]: I0218 20:14:45.179812 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:14:45 crc kubenswrapper[4932]: E0218 20:14:45.181307 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:14:57 crc kubenswrapper[4932]: I0218 20:14:57.197694 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:14:57 crc 
kubenswrapper[4932]: E0218 20:14:57.198726 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.144572 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"] Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.147183 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.151739 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.152589 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.154698 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"] Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.227357 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43cf3e74-b4e7-4f54-b21c-cf9018235782-config-volume\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.227529 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szgvq\" (UniqueName: \"kubernetes.io/projected/43cf3e74-b4e7-4f54-b21c-cf9018235782-kube-api-access-szgvq\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.227580 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43cf3e74-b4e7-4f54-b21c-cf9018235782-secret-volume\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.330331 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43cf3e74-b4e7-4f54-b21c-cf9018235782-secret-volume\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.330548 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43cf3e74-b4e7-4f54-b21c-cf9018235782-config-volume\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.330686 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szgvq\" (UniqueName: \"kubernetes.io/projected/43cf3e74-b4e7-4f54-b21c-cf9018235782-kube-api-access-szgvq\") pod \"collect-profiles-29524095-k4shl\" (UID: 
\"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.331361 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43cf3e74-b4e7-4f54-b21c-cf9018235782-config-volume\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.336093 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43cf3e74-b4e7-4f54-b21c-cf9018235782-secret-volume\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.346488 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szgvq\" (UniqueName: \"kubernetes.io/projected/43cf3e74-b4e7-4f54-b21c-cf9018235782-kube-api-access-szgvq\") pod \"collect-profiles-29524095-k4shl\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.481421 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:00 crc kubenswrapper[4932]: I0218 20:15:00.940412 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"] Feb 18 20:15:01 crc kubenswrapper[4932]: I0218 20:15:01.602782 4932 generic.go:334] "Generic (PLEG): container finished" podID="43cf3e74-b4e7-4f54-b21c-cf9018235782" containerID="b79875069ecc1431ced41ee0aadf13bbe89c7ee6b34078234cc6eb1c6d79dd0b" exitCode=0 Feb 18 20:15:01 crc kubenswrapper[4932]: I0218 20:15:01.602832 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" event={"ID":"43cf3e74-b4e7-4f54-b21c-cf9018235782","Type":"ContainerDied","Data":"b79875069ecc1431ced41ee0aadf13bbe89c7ee6b34078234cc6eb1c6d79dd0b"} Feb 18 20:15:01 crc kubenswrapper[4932]: I0218 20:15:01.602861 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" event={"ID":"43cf3e74-b4e7-4f54-b21c-cf9018235782","Type":"ContainerStarted","Data":"29e750d064faaab62028bef54cf34b20e09184983057ee8ad64092c7db2f70cf"} Feb 18 20:15:02 crc kubenswrapper[4932]: I0218 20:15:02.979834 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.088156 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43cf3e74-b4e7-4f54-b21c-cf9018235782-config-volume\") pod \"43cf3e74-b4e7-4f54-b21c-cf9018235782\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.088235 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43cf3e74-b4e7-4f54-b21c-cf9018235782-secret-volume\") pod \"43cf3e74-b4e7-4f54-b21c-cf9018235782\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.088311 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szgvq\" (UniqueName: \"kubernetes.io/projected/43cf3e74-b4e7-4f54-b21c-cf9018235782-kube-api-access-szgvq\") pod \"43cf3e74-b4e7-4f54-b21c-cf9018235782\" (UID: \"43cf3e74-b4e7-4f54-b21c-cf9018235782\") " Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.089144 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43cf3e74-b4e7-4f54-b21c-cf9018235782-config-volume" (OuterVolumeSpecName: "config-volume") pod "43cf3e74-b4e7-4f54-b21c-cf9018235782" (UID: "43cf3e74-b4e7-4f54-b21c-cf9018235782"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.093864 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43cf3e74-b4e7-4f54-b21c-cf9018235782-kube-api-access-szgvq" (OuterVolumeSpecName: "kube-api-access-szgvq") pod "43cf3e74-b4e7-4f54-b21c-cf9018235782" (UID: "43cf3e74-b4e7-4f54-b21c-cf9018235782"). 
InnerVolumeSpecName "kube-api-access-szgvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.100546 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43cf3e74-b4e7-4f54-b21c-cf9018235782-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "43cf3e74-b4e7-4f54-b21c-cf9018235782" (UID: "43cf3e74-b4e7-4f54-b21c-cf9018235782"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.201207 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szgvq\" (UniqueName: \"kubernetes.io/projected/43cf3e74-b4e7-4f54-b21c-cf9018235782-kube-api-access-szgvq\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.201273 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43cf3e74-b4e7-4f54-b21c-cf9018235782-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.201287 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/43cf3e74-b4e7-4f54-b21c-cf9018235782-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.624659 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" event={"ID":"43cf3e74-b4e7-4f54-b21c-cf9018235782","Type":"ContainerDied","Data":"29e750d064faaab62028bef54cf34b20e09184983057ee8ad64092c7db2f70cf"} Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.624736 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e750d064faaab62028bef54cf34b20e09184983057ee8ad64092c7db2f70cf" Feb 18 20:15:03 crc kubenswrapper[4932]: I0218 20:15:03.624811 4932 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl" Feb 18 20:15:04 crc kubenswrapper[4932]: I0218 20:15:04.055958 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"] Feb 18 20:15:04 crc kubenswrapper[4932]: I0218 20:15:04.064697 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-46gfc"] Feb 18 20:15:05 crc kubenswrapper[4932]: I0218 20:15:05.192649 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="048e17bc-05bf-40e4-9f40-87d936fcf772" path="/var/lib/kubelet/pods/048e17bc-05bf-40e4-9f40-87d936fcf772/volumes" Feb 18 20:15:05 crc kubenswrapper[4932]: I0218 20:15:05.879013 4932 scope.go:117] "RemoveContainer" containerID="67de493045dcef40d7dcd7366beacc478832b3155bced8f9164fd20b4a4dc42d" Feb 18 20:15:12 crc kubenswrapper[4932]: I0218 20:15:12.179478 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:15:12 crc kubenswrapper[4932]: E0218 20:15:12.180585 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:15:23 crc kubenswrapper[4932]: I0218 20:15:23.179520 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:15:23 crc kubenswrapper[4932]: E0218 20:15:23.180409 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:15:37 crc kubenswrapper[4932]: I0218 20:15:37.198569 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:15:37 crc kubenswrapper[4932]: E0218 20:15:37.199425 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:15:52 crc kubenswrapper[4932]: I0218 20:15:52.179818 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:15:52 crc kubenswrapper[4932]: E0218 20:15:52.180564 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:16:05 crc kubenswrapper[4932]: I0218 20:16:05.180129 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:16:05 crc kubenswrapper[4932]: E0218 20:16:05.181533 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:16:05 crc kubenswrapper[4932]: I0218 20:16:05.957015 4932 scope.go:117] "RemoveContainer" containerID="dc22e969de30f320df29d8fb3300ead1ccce40e8fd0a68c631edab60b4aae345" Feb 18 20:16:05 crc kubenswrapper[4932]: I0218 20:16:05.998851 4932 scope.go:117] "RemoveContainer" containerID="d5e3d44b8c687a4ed0339e8d165e82f6353dbc670604a3f79e4367feb38ddc1a" Feb 18 20:16:06 crc kubenswrapper[4932]: I0218 20:16:06.063449 4932 scope.go:117] "RemoveContainer" containerID="ceae045b7c2c872fa7d9d4bedd72b0329dffb4ff15e8c477344b3b13adecbd9f" Feb 18 20:16:19 crc kubenswrapper[4932]: I0218 20:16:19.179705 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:16:19 crc kubenswrapper[4932]: E0218 20:16:19.180906 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:16:34 crc kubenswrapper[4932]: I0218 20:16:34.182136 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:16:34 crc kubenswrapper[4932]: I0218 20:16:34.689154 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" 
event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"0d641d1880a050cdf1021a445fa79e88f90ca1f340fe0f38bc6a038f7b103aec"} Feb 18 20:16:51 crc kubenswrapper[4932]: I0218 20:16:51.878728 4932 generic.go:334] "Generic (PLEG): container finished" podID="8af71c97-85dc-46f5-9fe0-7e4827f3e981" containerID="92107d2c8f069f6435154109fac7c79f113f6ea4864ec704bdf5eec52f4f21b0" exitCode=0 Feb 18 20:16:51 crc kubenswrapper[4932]: I0218 20:16:51.878849 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" event={"ID":"8af71c97-85dc-46f5-9fe0-7e4827f3e981","Type":"ContainerDied","Data":"92107d2c8f069f6435154109fac7c79f113f6ea4864ec704bdf5eec52f4f21b0"} Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.321794 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446408 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-combined-ca-bundle\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446479 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9vzz\" (UniqueName: \"kubernetes.io/projected/8af71c97-85dc-46f5-9fe0-7e4827f3e981-kube-api-access-g9vzz\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446525 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-extra-config-0\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" 
(UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446601 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-0\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446629 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-ssh-key-openstack-edpm-ipam\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446731 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-0\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446773 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-1\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446873 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-1\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.446973 4932 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-inventory\") pod \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\" (UID: \"8af71c97-85dc-46f5-9fe0-7e4827f3e981\") " Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.452347 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.453851 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af71c97-85dc-46f5-9fe0-7e4827f3e981-kube-api-access-g9vzz" (OuterVolumeSpecName: "kube-api-access-g9vzz") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "kube-api-access-g9vzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.478342 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.479685 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-inventory" (OuterVolumeSpecName: "inventory") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.479894 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.481407 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.483508 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.487408 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.487486 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8af71c97-85dc-46f5-9fe0-7e4827f3e981" (UID: "8af71c97-85dc-46f5-9fe0-7e4827f3e981"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549232 4932 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549268 4932 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549281 4932 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549292 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549306 4932 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549316 4932 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9vzz\" (UniqueName: \"kubernetes.io/projected/8af71c97-85dc-46f5-9fe0-7e4827f3e981-kube-api-access-g9vzz\") on node \"crc\" DevicePath \"\"" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549329 4932 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549340 4932 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.549383 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8af71c97-85dc-46f5-9fe0-7e4827f3e981-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.896871 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" event={"ID":"8af71c97-85dc-46f5-9fe0-7e4827f3e981","Type":"ContainerDied","Data":"95205dea94bf7eafa594ecbadc936061cd8dadbc7f5242ac8efb85f35357d2f5"} Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.896920 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95205dea94bf7eafa594ecbadc936061cd8dadbc7f5242ac8efb85f35357d2f5" Feb 18 20:16:53 crc kubenswrapper[4932]: I0218 20:16:53.896917 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6d4tk" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.095691 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"] Feb 18 20:16:54 crc kubenswrapper[4932]: E0218 20:16:54.096529 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af71c97-85dc-46f5-9fe0-7e4827f3e981" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.096555 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af71c97-85dc-46f5-9fe0-7e4827f3e981" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 20:16:54 crc kubenswrapper[4932]: E0218 20:16:54.096575 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43cf3e74-b4e7-4f54-b21c-cf9018235782" containerName="collect-profiles" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.096583 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="43cf3e74-b4e7-4f54-b21c-cf9018235782" containerName="collect-profiles" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.096821 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af71c97-85dc-46f5-9fe0-7e4827f3e981" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.096847 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="43cf3e74-b4e7-4f54-b21c-cf9018235782" containerName="collect-profiles" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.097735 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.100114 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.100821 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vjvmw" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.101433 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.101455 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.101681 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.107811 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"] Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.165014 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.165523 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: 
\"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.165663 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.165799 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.165935 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.166023 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.166261 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbg4q\" (UniqueName: \"kubernetes.io/projected/438e3417-67a9-417c-9e75-d0e207ab1812-kube-api-access-nbg4q\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268646 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268719 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268747 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268773 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268821 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268843 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.268903 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbg4q\" (UniqueName: \"kubernetes.io/projected/438e3417-67a9-417c-9e75-d0e207ab1812-kube-api-access-nbg4q\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.274508 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.274530 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.274939 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.275095 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.275322 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.277110 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.286969 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbg4q\" (UniqueName: \"kubernetes.io/projected/438e3417-67a9-417c-9e75-d0e207ab1812-kube-api-access-nbg4q\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.415501 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:16:54 crc kubenswrapper[4932]: I0218 20:16:54.943079 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv"] Feb 18 20:16:55 crc kubenswrapper[4932]: I0218 20:16:55.917638 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" event={"ID":"438e3417-67a9-417c-9e75-d0e207ab1812","Type":"ContainerStarted","Data":"0ff3dd8901a960fc64101e349f14f625d265537feec7a85d08ecad6a7f58dcfd"} Feb 18 20:16:55 crc kubenswrapper[4932]: I0218 20:16:55.918078 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" event={"ID":"438e3417-67a9-417c-9e75-d0e207ab1812","Type":"ContainerStarted","Data":"9fedb3dcc828bf71bbbf9cbf690773fa436b77d77990e905ec72e0916360e9dd"} Feb 18 20:16:55 crc kubenswrapper[4932]: I0218 20:16:55.940935 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" podStartSLOduration=1.486760805 podStartE2EDuration="1.94091338s" podCreationTimestamp="2026-02-18 20:16:54 +0000 UTC" firstStartedPulling="2026-02-18 20:16:54.957243852 +0000 UTC m=+2578.539198697" lastFinishedPulling="2026-02-18 20:16:55.411396427 +0000 UTC m=+2578.993351272" observedRunningTime="2026-02-18 20:16:55.937048025 +0000 UTC m=+2579.519002870" watchObservedRunningTime="2026-02-18 20:16:55.94091338 +0000 UTC m=+2579.522868225" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.661157 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wmn86"] Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.663817 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.673918 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wmn86"] Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.833321 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-utilities\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.833382 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-catalog-content\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.833423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zfr\" (UniqueName: \"kubernetes.io/projected/bccb4e09-25d0-498e-92d0-dac8572db926-kube-api-access-r5zfr\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.936030 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-catalog-content\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.936093 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-r5zfr\" (UniqueName: \"kubernetes.io/projected/bccb4e09-25d0-498e-92d0-dac8572db926-kube-api-access-r5zfr\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.936248 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-utilities\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.936536 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-catalog-content\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.936855 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-utilities\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:50 crc kubenswrapper[4932]: I0218 20:17:50.957034 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zfr\" (UniqueName: \"kubernetes.io/projected/bccb4e09-25d0-498e-92d0-dac8572db926-kube-api-access-r5zfr\") pod \"certified-operators-wmn86\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:51 crc kubenswrapper[4932]: I0218 20:17:51.007408 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:17:51 crc kubenswrapper[4932]: I0218 20:17:51.530373 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wmn86"] Feb 18 20:17:51 crc kubenswrapper[4932]: I0218 20:17:51.567519 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerStarted","Data":"c491639733116403151eda258c62a2eee151de18e5318d854dd76fc4c4f42d9a"} Feb 18 20:17:52 crc kubenswrapper[4932]: I0218 20:17:52.581125 4932 generic.go:334] "Generic (PLEG): container finished" podID="bccb4e09-25d0-498e-92d0-dac8572db926" containerID="67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db" exitCode=0 Feb 18 20:17:52 crc kubenswrapper[4932]: I0218 20:17:52.581220 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerDied","Data":"67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db"} Feb 18 20:17:53 crc kubenswrapper[4932]: I0218 20:17:53.592705 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerStarted","Data":"9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b"} Feb 18 20:17:54 crc kubenswrapper[4932]: I0218 20:17:54.604492 4932 generic.go:334] "Generic (PLEG): container finished" podID="bccb4e09-25d0-498e-92d0-dac8572db926" containerID="9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b" exitCode=0 Feb 18 20:17:54 crc kubenswrapper[4932]: I0218 20:17:54.604558 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" 
event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerDied","Data":"9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b"} Feb 18 20:17:55 crc kubenswrapper[4932]: I0218 20:17:55.623491 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerStarted","Data":"6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419"} Feb 18 20:17:55 crc kubenswrapper[4932]: I0218 20:17:55.684277 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wmn86" podStartSLOduration=3.255925645 podStartE2EDuration="5.684245054s" podCreationTimestamp="2026-02-18 20:17:50 +0000 UTC" firstStartedPulling="2026-02-18 20:17:52.583563062 +0000 UTC m=+2636.165517947" lastFinishedPulling="2026-02-18 20:17:55.011882461 +0000 UTC m=+2638.593837356" observedRunningTime="2026-02-18 20:17:55.673490198 +0000 UTC m=+2639.255445063" watchObservedRunningTime="2026-02-18 20:17:55.684245054 +0000 UTC m=+2639.266199939" Feb 18 20:18:01 crc kubenswrapper[4932]: I0218 20:18:01.008529 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:18:01 crc kubenswrapper[4932]: I0218 20:18:01.013037 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:18:01 crc kubenswrapper[4932]: I0218 20:18:01.089228 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:18:01 crc kubenswrapper[4932]: I0218 20:18:01.771781 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:18:01 crc kubenswrapper[4932]: I0218 20:18:01.841650 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-wmn86"] Feb 18 20:18:03 crc kubenswrapper[4932]: I0218 20:18:03.715130 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wmn86" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="registry-server" containerID="cri-o://6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419" gracePeriod=2 Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.219994 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.370101 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-utilities\") pod \"bccb4e09-25d0-498e-92d0-dac8572db926\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.370592 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-catalog-content\") pod \"bccb4e09-25d0-498e-92d0-dac8572db926\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.371527 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-utilities" (OuterVolumeSpecName: "utilities") pod "bccb4e09-25d0-498e-92d0-dac8572db926" (UID: "bccb4e09-25d0-498e-92d0-dac8572db926"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.375449 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5zfr\" (UniqueName: \"kubernetes.io/projected/bccb4e09-25d0-498e-92d0-dac8572db926-kube-api-access-r5zfr\") pod \"bccb4e09-25d0-498e-92d0-dac8572db926\" (UID: \"bccb4e09-25d0-498e-92d0-dac8572db926\") " Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.376127 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.385513 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bccb4e09-25d0-498e-92d0-dac8572db926-kube-api-access-r5zfr" (OuterVolumeSpecName: "kube-api-access-r5zfr") pod "bccb4e09-25d0-498e-92d0-dac8572db926" (UID: "bccb4e09-25d0-498e-92d0-dac8572db926"). InnerVolumeSpecName "kube-api-access-r5zfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.442999 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bccb4e09-25d0-498e-92d0-dac8572db926" (UID: "bccb4e09-25d0-498e-92d0-dac8572db926"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.477381 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccb4e09-25d0-498e-92d0-dac8572db926-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.477412 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5zfr\" (UniqueName: \"kubernetes.io/projected/bccb4e09-25d0-498e-92d0-dac8572db926-kube-api-access-r5zfr\") on node \"crc\" DevicePath \"\"" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.730446 4932 generic.go:334] "Generic (PLEG): container finished" podID="bccb4e09-25d0-498e-92d0-dac8572db926" containerID="6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419" exitCode=0 Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.730499 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerDied","Data":"6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419"} Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.730535 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmn86" event={"ID":"bccb4e09-25d0-498e-92d0-dac8572db926","Type":"ContainerDied","Data":"c491639733116403151eda258c62a2eee151de18e5318d854dd76fc4c4f42d9a"} Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.730583 4932 scope.go:117] "RemoveContainer" containerID="6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.731367 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wmn86" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.799604 4932 scope.go:117] "RemoveContainer" containerID="9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.810566 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wmn86"] Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.824243 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wmn86"] Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.824790 4932 scope.go:117] "RemoveContainer" containerID="67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.891233 4932 scope.go:117] "RemoveContainer" containerID="6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419" Feb 18 20:18:04 crc kubenswrapper[4932]: E0218 20:18:04.892626 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419\": container with ID starting with 6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419 not found: ID does not exist" containerID="6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.892663 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419"} err="failed to get container status \"6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419\": rpc error: code = NotFound desc = could not find container \"6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419\": container with ID starting with 6d98d52789bbea99da1e328673ef50d3d5a4acd97fa2b7339f1ee4f1a8d05419 not 
found: ID does not exist" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.892685 4932 scope.go:117] "RemoveContainer" containerID="9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b" Feb 18 20:18:04 crc kubenswrapper[4932]: E0218 20:18:04.893438 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b\": container with ID starting with 9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b not found: ID does not exist" containerID="9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.893465 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b"} err="failed to get container status \"9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b\": rpc error: code = NotFound desc = could not find container \"9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b\": container with ID starting with 9306a9a10517478564a7f903e55025cf2a210eb87f95aaa01cd1caa7ac58be1b not found: ID does not exist" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.893479 4932 scope.go:117] "RemoveContainer" containerID="67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db" Feb 18 20:18:04 crc kubenswrapper[4932]: E0218 20:18:04.893781 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db\": container with ID starting with 67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db not found: ID does not exist" containerID="67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db" Feb 18 20:18:04 crc kubenswrapper[4932]: I0218 20:18:04.893803 4932 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db"} err="failed to get container status \"67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db\": rpc error: code = NotFound desc = could not find container \"67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db\": container with ID starting with 67837ef7c8fa232bd17e8402fc32a06b0c1f34ca3de06e6f38b86af3a69f57db not found: ID does not exist" Feb 18 20:18:05 crc kubenswrapper[4932]: I0218 20:18:05.193881 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" path="/var/lib/kubelet/pods/bccb4e09-25d0-498e-92d0-dac8572db926/volumes" Feb 18 20:18:57 crc kubenswrapper[4932]: I0218 20:18:57.607311 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:18:57 crc kubenswrapper[4932]: I0218 20:18:57.608364 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:18:59 crc kubenswrapper[4932]: I0218 20:18:59.380263 4932 generic.go:334] "Generic (PLEG): container finished" podID="438e3417-67a9-417c-9e75-d0e207ab1812" containerID="0ff3dd8901a960fc64101e349f14f625d265537feec7a85d08ecad6a7f58dcfd" exitCode=0 Feb 18 20:18:59 crc kubenswrapper[4932]: I0218 20:18:59.380372 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" 
event={"ID":"438e3417-67a9-417c-9e75-d0e207ab1812","Type":"ContainerDied","Data":"0ff3dd8901a960fc64101e349f14f625d265537feec7a85d08ecad6a7f58dcfd"} Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.892196 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981297 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-1\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981337 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-0\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981404 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-2\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981522 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ssh-key-openstack-edpm-ipam\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981583 4932 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-inventory\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981681 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-telemetry-combined-ca-bundle\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.981707 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbg4q\" (UniqueName: \"kubernetes.io/projected/438e3417-67a9-417c-9e75-d0e207ab1812-kube-api-access-nbg4q\") pod \"438e3417-67a9-417c-9e75-d0e207ab1812\" (UID: \"438e3417-67a9-417c-9e75-d0e207ab1812\") " Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.987620 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438e3417-67a9-417c-9e75-d0e207ab1812-kube-api-access-nbg4q" (OuterVolumeSpecName: "kube-api-access-nbg4q") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "kube-api-access-nbg4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:19:00 crc kubenswrapper[4932]: I0218 20:19:00.990486 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.013315 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-inventory" (OuterVolumeSpecName: "inventory") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.015072 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.024632 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.029593 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.041334 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "438e3417-67a9-417c-9e75-d0e207ab1812" (UID: "438e3417-67a9-417c-9e75-d0e207ab1812"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.084662 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.084719 4932 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.084739 4932 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.084760 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbg4q\" (UniqueName: \"kubernetes.io/projected/438e3417-67a9-417c-9e75-d0e207ab1812-kube-api-access-nbg4q\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.084780 4932 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 
20:19:01.084800 4932 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.084821 4932 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/438e3417-67a9-417c-9e75-d0e207ab1812-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.406474 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" event={"ID":"438e3417-67a9-417c-9e75-d0e207ab1812","Type":"ContainerDied","Data":"9fedb3dcc828bf71bbbf9cbf690773fa436b77d77990e905ec72e0916360e9dd"} Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.406552 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fedb3dcc828bf71bbbf9cbf690773fa436b77d77990e905ec72e0916360e9dd" Feb 18 20:19:01 crc kubenswrapper[4932]: I0218 20:19:01.406657 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b2mhv" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.606613 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.607289 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.834716 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x4fhd"] Feb 18 20:19:27 crc kubenswrapper[4932]: E0218 20:19:27.835307 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438e3417-67a9-417c-9e75-d0e207ab1812" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.835333 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="438e3417-67a9-417c-9e75-d0e207ab1812" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:19:27 crc kubenswrapper[4932]: E0218 20:19:27.835356 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="registry-server" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.835365 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="registry-server" Feb 18 20:19:27 crc kubenswrapper[4932]: E0218 20:19:27.835410 4932 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="extract-utilities" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.835420 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="extract-utilities" Feb 18 20:19:27 crc kubenswrapper[4932]: E0218 20:19:27.835432 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="extract-content" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.835440 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="extract-content" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.835682 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="438e3417-67a9-417c-9e75-d0e207ab1812" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.835702 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="bccb4e09-25d0-498e-92d0-dac8572db926" containerName="registry-server" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.837207 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.857649 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4fhd"] Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.880220 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tl7q\" (UniqueName: \"kubernetes.io/projected/9c726726-9ae9-4956-9999-09c956029615-kube-api-access-9tl7q\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.880300 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-utilities\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.881838 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-catalog-content\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.984256 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-catalog-content\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.984339 4932 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-9tl7q\" (UniqueName: \"kubernetes.io/projected/9c726726-9ae9-4956-9999-09c956029615-kube-api-access-9tl7q\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.984373 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-utilities\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.984875 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-catalog-content\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:27 crc kubenswrapper[4932]: I0218 20:19:27.985336 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-utilities\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:28 crc kubenswrapper[4932]: I0218 20:19:28.004300 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tl7q\" (UniqueName: \"kubernetes.io/projected/9c726726-9ae9-4956-9999-09c956029615-kube-api-access-9tl7q\") pod \"redhat-operators-x4fhd\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") " pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:28 crc kubenswrapper[4932]: I0218 20:19:28.181451 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:28 crc kubenswrapper[4932]: I0218 20:19:28.774131 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4fhd"] Feb 18 20:19:29 crc kubenswrapper[4932]: I0218 20:19:29.729063 4932 generic.go:334] "Generic (PLEG): container finished" podID="9c726726-9ae9-4956-9999-09c956029615" containerID="d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc" exitCode=0 Feb 18 20:19:29 crc kubenswrapper[4932]: I0218 20:19:29.729132 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerDied","Data":"d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc"} Feb 18 20:19:29 crc kubenswrapper[4932]: I0218 20:19:29.729834 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerStarted","Data":"e093f1cca327bf041c93f56c487c61e59e1e403678b96164f3bbb1c6097b672a"} Feb 18 20:19:29 crc kubenswrapper[4932]: I0218 20:19:29.731861 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:19:31 crc kubenswrapper[4932]: I0218 20:19:31.765420 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerStarted","Data":"c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f"} Feb 18 20:19:34 crc kubenswrapper[4932]: I0218 20:19:34.794168 4932 generic.go:334] "Generic (PLEG): container finished" podID="9c726726-9ae9-4956-9999-09c956029615" containerID="c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f" exitCode=0 Feb 18 20:19:34 crc kubenswrapper[4932]: I0218 20:19:34.794213 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerDied","Data":"c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f"} Feb 18 20:19:35 crc kubenswrapper[4932]: I0218 20:19:35.808559 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerStarted","Data":"ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a"} Feb 18 20:19:35 crc kubenswrapper[4932]: I0218 20:19:35.835617 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x4fhd" podStartSLOduration=3.340472521 podStartE2EDuration="8.835597975s" podCreationTimestamp="2026-02-18 20:19:27 +0000 UTC" firstStartedPulling="2026-02-18 20:19:29.731595861 +0000 UTC m=+2733.313550706" lastFinishedPulling="2026-02-18 20:19:35.226721315 +0000 UTC m=+2738.808676160" observedRunningTime="2026-02-18 20:19:35.829677619 +0000 UTC m=+2739.411632464" watchObservedRunningTime="2026-02-18 20:19:35.835597975 +0000 UTC m=+2739.417552820" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.091407 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.093914 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.095569 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.107445 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176057 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176101 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176126 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvp8\" (UniqueName: \"kubernetes.io/projected/80782f4b-1aed-46fc-9400-896d1a9d02f7-kube-api-access-6vvp8\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176151 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176411 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176520 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-scripts\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176552 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-sys\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176615 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-dev\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176734 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-nvme\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176787 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176813 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-config-data-custom\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176855 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-config-data\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.176900 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-run\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.177064 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-lib-modules\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.177102 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: 
\"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.206073 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.208145 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.210462 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.216813 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.244056 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.245973 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.247962 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.272181 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279380 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279484 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-run\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279541 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279584 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h488t\" (UniqueName: \"kubernetes.io/projected/454e91b9-5fe5-445a-ae9d-372899613515-kube-api-access-h488t\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279609 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-sys\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279628 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279654 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-lib-modules\") pod 
\"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279686 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279725 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-lib-modules\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279759 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279807 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-lib-modules\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279843 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 
20:19:36.279874 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279913 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279942 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279968 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.279991 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280016 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280216 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvp8\" (UniqueName: \"kubernetes.io/projected/80782f4b-1aed-46fc-9400-896d1a9d02f7-kube-api-access-6vvp8\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280277 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280306 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280349 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280378 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280416 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280447 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280457 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280502 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcfn6\" (UniqueName: \"kubernetes.io/projected/74b2e4dc-d3d9-4aa4-8255-6c174d925528-kube-api-access-fcfn6\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280541 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-iscsi\") pod \"cinder-backup-0\" (UID: 
\"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280576 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280598 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280621 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280644 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280660 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-scripts\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 
20:19:36.280693 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-sys\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280719 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280744 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-dev\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280776 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-dev\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280801 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-sys\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280856 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-locks-cinder\") pod 
\"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280860 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-dev\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280911 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280958 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-nvme\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.280989 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281012 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-config-data-custom\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281033 4932 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281072 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-config-data\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281103 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281125 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281160 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281204 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281250 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-run\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281273 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281368 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281620 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-run\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.281907 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/80782f4b-1aed-46fc-9400-896d1a9d02f7-etc-nvme\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: 
I0218 20:19:36.292593 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-config-data-custom\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.293428 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-scripts\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.300266 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.300378 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80782f4b-1aed-46fc-9400-896d1a9d02f7-config-data\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.304830 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvp8\" (UniqueName: \"kubernetes.io/projected/80782f4b-1aed-46fc-9400-896d1a9d02f7-kube-api-access-6vvp8\") pod \"cinder-backup-0\" (UID: \"80782f4b-1aed-46fc-9400-896d1a9d02f7\") " pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382817 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h488t\" (UniqueName: 
\"kubernetes.io/projected/454e91b9-5fe5-445a-ae9d-372899613515-kube-api-access-h488t\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382870 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-sys\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382887 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382912 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382939 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382985 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-sys\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 
crc kubenswrapper[4932]: I0218 20:19:36.383050 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383059 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.382999 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383078 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383229 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383309 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383331 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383347 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383409 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383410 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383458 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " 
pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383477 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383535 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383469 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383569 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383615 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383629 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383706 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcfn6\" (UniqueName: \"kubernetes.io/projected/74b2e4dc-d3d9-4aa4-8255-6c174d925528-kube-api-access-fcfn6\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383763 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383785 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383817 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383880 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383921 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-dev\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383962 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.383999 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384068 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384108 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384128 4932 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384167 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384277 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384327 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384349 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384360 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-dev\") 
pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384393 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384401 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-run\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384427 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-run\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384450 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384470 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384480 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384481 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384594 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/74b2e4dc-d3d9-4aa4-8255-6c174d925528-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384640 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384684 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.384822 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/454e91b9-5fe5-445a-ae9d-372899613515-etc-iscsi\") pod 
\"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.388568 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.388630 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.389163 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.389378 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.389981 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.391719 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.393929 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/454e91b9-5fe5-445a-ae9d-372899613515-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.394012 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b2e4dc-d3d9-4aa4-8255-6c174d925528-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.401036 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcfn6\" (UniqueName: \"kubernetes.io/projected/74b2e4dc-d3d9-4aa4-8255-6c174d925528-kube-api-access-fcfn6\") pod \"cinder-volume-nfs-0\" (UID: \"74b2e4dc-d3d9-4aa4-8255-6c174d925528\") " pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.409039 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h488t\" (UniqueName: \"kubernetes.io/projected/454e91b9-5fe5-445a-ae9d-372899613515-kube-api-access-h488t\") pod \"cinder-volume-nfs-2-0\" (UID: \"454e91b9-5fe5-445a-ae9d-372899613515\") " pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.415898 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.527668 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:36 crc kubenswrapper[4932]: I0218 20:19:36.571645 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.102798 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 18 20:19:37 crc kubenswrapper[4932]: W0218 20:19:37.107662 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80782f4b_1aed_46fc_9400_896d1a9d02f7.slice/crio-7600579eb06dbf3a54ed89c97273a883fc381368ad7b22d6b7c5503b013da031 WatchSource:0}: Error finding container 7600579eb06dbf3a54ed89c97273a883fc381368ad7b22d6b7c5503b013da031: Status 404 returned error can't find the container with id 7600579eb06dbf3a54ed89c97273a883fc381368ad7b22d6b7c5503b013da031 Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.217096 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.459349 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Feb 18 20:19:37 crc kubenswrapper[4932]: W0218 20:19:37.537303 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod454e91b9_5fe5_445a_ae9d_372899613515.slice/crio-bc5455a3dd4da42eccaa14fb360b24aa5ca0d2f5bf331702de4f093df582627e WatchSource:0}: Error finding container bc5455a3dd4da42eccaa14fb360b24aa5ca0d2f5bf331702de4f093df582627e: Status 404 returned error can't find the container with id bc5455a3dd4da42eccaa14fb360b24aa5ca0d2f5bf331702de4f093df582627e Feb 18 20:19:37 crc kubenswrapper[4932]: 
I0218 20:19:37.842209 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"454e91b9-5fe5-445a-ae9d-372899613515","Type":"ContainerStarted","Data":"b95971177658ba52d642e7238e4ecee980abb72853fc31ea24786cacecafdc5d"} Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.842447 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"454e91b9-5fe5-445a-ae9d-372899613515","Type":"ContainerStarted","Data":"bc5455a3dd4da42eccaa14fb360b24aa5ca0d2f5bf331702de4f093df582627e"} Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.844334 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"74b2e4dc-d3d9-4aa4-8255-6c174d925528","Type":"ContainerStarted","Data":"e8531dd854d16aa76856d0d54c9e186cb1fa0a9db5a075b1d3eab6ff2c38e1f1"} Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.844604 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"74b2e4dc-d3d9-4aa4-8255-6c174d925528","Type":"ContainerStarted","Data":"d420ab5610f4ea8287899e3183abf991cb38a3378a3b16a21774a88c82218f0c"} Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.846594 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"80782f4b-1aed-46fc-9400-896d1a9d02f7","Type":"ContainerStarted","Data":"b951723a7b1973f640ad8f3ac8bb268f14dee1538c422983e04afa8b026c38aa"} Feb 18 20:19:37 crc kubenswrapper[4932]: I0218 20:19:37.846613 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"80782f4b-1aed-46fc-9400-896d1a9d02f7","Type":"ContainerStarted","Data":"7600579eb06dbf3a54ed89c97273a883fc381368ad7b22d6b7c5503b013da031"} Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.182395 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:38 
crc kubenswrapper[4932]: I0218 20:19:38.182630 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.856551 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"74b2e4dc-d3d9-4aa4-8255-6c174d925528","Type":"ContainerStarted","Data":"f26521e0869587a39b8f76ff63c77c5a07c355f949471d009ae4d05f01d5f49b"} Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.858821 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"80782f4b-1aed-46fc-9400-896d1a9d02f7","Type":"ContainerStarted","Data":"7e7efa80400fba75484b6234daf43d2a360d3281c96eea0e23c099be7bc8fa7e"} Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.860815 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"454e91b9-5fe5-445a-ae9d-372899613515","Type":"ContainerStarted","Data":"d146e8fc018c30c8e8d2da8c4f65b77944594efe79a7bb710ac1fcae182a389e"} Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.885580 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.664174417 podStartE2EDuration="2.885561532s" podCreationTimestamp="2026-02-18 20:19:36 +0000 UTC" firstStartedPulling="2026-02-18 20:19:37.361364894 +0000 UTC m=+2740.943319739" lastFinishedPulling="2026-02-18 20:19:37.582752009 +0000 UTC m=+2741.164706854" observedRunningTime="2026-02-18 20:19:38.877443721 +0000 UTC m=+2742.459398566" watchObservedRunningTime="2026-02-18 20:19:38.885561532 +0000 UTC m=+2742.467516377" Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.902280 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.648682123 podStartE2EDuration="2.902258665s" podCreationTimestamp="2026-02-18 20:19:36 +0000 UTC" 
firstStartedPulling="2026-02-18 20:19:37.110748135 +0000 UTC m=+2740.692702980" lastFinishedPulling="2026-02-18 20:19:37.364324677 +0000 UTC m=+2740.946279522" observedRunningTime="2026-02-18 20:19:38.897397535 +0000 UTC m=+2742.479352380" watchObservedRunningTime="2026-02-18 20:19:38.902258665 +0000 UTC m=+2742.484213510" Feb 18 20:19:38 crc kubenswrapper[4932]: I0218 20:19:38.921480 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.858006451 podStartE2EDuration="2.92146169s" podCreationTimestamp="2026-02-18 20:19:36 +0000 UTC" firstStartedPulling="2026-02-18 20:19:37.551309482 +0000 UTC m=+2741.133264327" lastFinishedPulling="2026-02-18 20:19:37.614764721 +0000 UTC m=+2741.196719566" observedRunningTime="2026-02-18 20:19:38.917587694 +0000 UTC m=+2742.499542559" watchObservedRunningTime="2026-02-18 20:19:38.92146169 +0000 UTC m=+2742.503416525" Feb 18 20:19:39 crc kubenswrapper[4932]: I0218 20:19:39.236454 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4fhd" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" probeResult="failure" output=< Feb 18 20:19:39 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 20:19:39 crc kubenswrapper[4932]: > Feb 18 20:19:41 crc kubenswrapper[4932]: I0218 20:19:41.416920 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 18 20:19:41 crc kubenswrapper[4932]: I0218 20:19:41.528737 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:41 crc kubenswrapper[4932]: I0218 20:19:41.572588 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.075069 4932 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-hwssw"] Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.079248 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.092042 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hwssw"] Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.134102 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-utilities\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.134144 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slpdf\" (UniqueName: \"kubernetes.io/projected/98eb30d5-e437-4090-a44b-84245137fb3c-kube-api-access-slpdf\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.134220 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-catalog-content\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.239479 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-catalog-content\") pod \"community-operators-hwssw\" (UID: 
\"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.239752 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-utilities\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.239786 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slpdf\" (UniqueName: \"kubernetes.io/projected/98eb30d5-e437-4090-a44b-84245137fb3c-kube-api-access-slpdf\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.241247 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-catalog-content\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.242830 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-utilities\") pod \"community-operators-hwssw\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.275519 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slpdf\" (UniqueName: \"kubernetes.io/projected/98eb30d5-e437-4090-a44b-84245137fb3c-kube-api-access-slpdf\") pod \"community-operators-hwssw\" (UID: 
\"98eb30d5-e437-4090-a44b-84245137fb3c\") " pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.447246 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:42 crc kubenswrapper[4932]: I0218 20:19:42.880504 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hwssw"] Feb 18 20:19:43 crc kubenswrapper[4932]: I0218 20:19:43.909815 4932 generic.go:334] "Generic (PLEG): container finished" podID="98eb30d5-e437-4090-a44b-84245137fb3c" containerID="864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1" exitCode=0 Feb 18 20:19:43 crc kubenswrapper[4932]: I0218 20:19:43.909907 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerDied","Data":"864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1"} Feb 18 20:19:43 crc kubenswrapper[4932]: I0218 20:19:43.910408 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerStarted","Data":"91192c666182a564538f31414848540c74e1395813e0c0300c2862c80cb37cb2"} Feb 18 20:19:44 crc kubenswrapper[4932]: I0218 20:19:44.920247 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerStarted","Data":"0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc"} Feb 18 20:19:45 crc kubenswrapper[4932]: I0218 20:19:45.941417 4932 generic.go:334] "Generic (PLEG): container finished" podID="98eb30d5-e437-4090-a44b-84245137fb3c" containerID="0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc" exitCode=0 Feb 18 20:19:45 crc kubenswrapper[4932]: I0218 
20:19:45.941470 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerDied","Data":"0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc"} Feb 18 20:19:46 crc kubenswrapper[4932]: I0218 20:19:46.590885 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 18 20:19:46 crc kubenswrapper[4932]: I0218 20:19:46.740225 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Feb 18 20:19:46 crc kubenswrapper[4932]: I0218 20:19:46.903416 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Feb 18 20:19:47 crc kubenswrapper[4932]: I0218 20:19:47.969842 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerStarted","Data":"50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736"} Feb 18 20:19:47 crc kubenswrapper[4932]: I0218 20:19:47.992148 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hwssw" podStartSLOduration=2.545072834 podStartE2EDuration="5.992125633s" podCreationTimestamp="2026-02-18 20:19:42 +0000 UTC" firstStartedPulling="2026-02-18 20:19:43.913026831 +0000 UTC m=+2747.494981676" lastFinishedPulling="2026-02-18 20:19:47.36007963 +0000 UTC m=+2750.942034475" observedRunningTime="2026-02-18 20:19:47.988497033 +0000 UTC m=+2751.570451878" watchObservedRunningTime="2026-02-18 20:19:47.992125633 +0000 UTC m=+2751.574080498" Feb 18 20:19:49 crc kubenswrapper[4932]: I0218 20:19:49.231895 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4fhd" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" 
probeResult="failure" output=< Feb 18 20:19:49 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 20:19:49 crc kubenswrapper[4932]: > Feb 18 20:19:52 crc kubenswrapper[4932]: I0218 20:19:52.447756 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:52 crc kubenswrapper[4932]: I0218 20:19:52.448816 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:52 crc kubenswrapper[4932]: I0218 20:19:52.504970 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:53 crc kubenswrapper[4932]: I0218 20:19:53.086585 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:53 crc kubenswrapper[4932]: I0218 20:19:53.136038 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hwssw"] Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.058616 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hwssw" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="registry-server" containerID="cri-o://50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736" gracePeriod=2 Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.612385 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.715530 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slpdf\" (UniqueName: \"kubernetes.io/projected/98eb30d5-e437-4090-a44b-84245137fb3c-kube-api-access-slpdf\") pod \"98eb30d5-e437-4090-a44b-84245137fb3c\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.715628 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-utilities\") pod \"98eb30d5-e437-4090-a44b-84245137fb3c\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.715657 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-catalog-content\") pod \"98eb30d5-e437-4090-a44b-84245137fb3c\" (UID: \"98eb30d5-e437-4090-a44b-84245137fb3c\") " Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.716895 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-utilities" (OuterVolumeSpecName: "utilities") pod "98eb30d5-e437-4090-a44b-84245137fb3c" (UID: "98eb30d5-e437-4090-a44b-84245137fb3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.722860 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98eb30d5-e437-4090-a44b-84245137fb3c-kube-api-access-slpdf" (OuterVolumeSpecName: "kube-api-access-slpdf") pod "98eb30d5-e437-4090-a44b-84245137fb3c" (UID: "98eb30d5-e437-4090-a44b-84245137fb3c"). InnerVolumeSpecName "kube-api-access-slpdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.776293 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98eb30d5-e437-4090-a44b-84245137fb3c" (UID: "98eb30d5-e437-4090-a44b-84245137fb3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.817711 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slpdf\" (UniqueName: \"kubernetes.io/projected/98eb30d5-e437-4090-a44b-84245137fb3c-kube-api-access-slpdf\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.817741 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:55 crc kubenswrapper[4932]: I0218 20:19:55.817750 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98eb30d5-e437-4090-a44b-84245137fb3c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.069333 4932 generic.go:334] "Generic (PLEG): container finished" podID="98eb30d5-e437-4090-a44b-84245137fb3c" containerID="50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736" exitCode=0 Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.069422 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hwssw" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.069415 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerDied","Data":"50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736"} Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.070526 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hwssw" event={"ID":"98eb30d5-e437-4090-a44b-84245137fb3c","Type":"ContainerDied","Data":"91192c666182a564538f31414848540c74e1395813e0c0300c2862c80cb37cb2"} Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.070653 4932 scope.go:117] "RemoveContainer" containerID="50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.098276 4932 scope.go:117] "RemoveContainer" containerID="0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.105371 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hwssw"] Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.116049 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hwssw"] Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.122164 4932 scope.go:117] "RemoveContainer" containerID="864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.179814 4932 scope.go:117] "RemoveContainer" containerID="50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736" Feb 18 20:19:56 crc kubenswrapper[4932]: E0218 20:19:56.180189 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736\": container with ID starting with 50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736 not found: ID does not exist" containerID="50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.180238 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736"} err="failed to get container status \"50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736\": rpc error: code = NotFound desc = could not find container \"50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736\": container with ID starting with 50d9e77330f9b9515cc0ca067fca4306fd6744d62ab4baad817f5ba57ab45736 not found: ID does not exist" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.180271 4932 scope.go:117] "RemoveContainer" containerID="0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc" Feb 18 20:19:56 crc kubenswrapper[4932]: E0218 20:19:56.180676 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc\": container with ID starting with 0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc not found: ID does not exist" containerID="0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.180708 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc"} err="failed to get container status \"0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc\": rpc error: code = NotFound desc = could not find container \"0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc\": container with ID 
starting with 0f4bfbcb38aeb85f918c1e61c20c7b0e265851ee1a173a68a4ab89079cd5cfbc not found: ID does not exist" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.180732 4932 scope.go:117] "RemoveContainer" containerID="864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1" Feb 18 20:19:56 crc kubenswrapper[4932]: E0218 20:19:56.181043 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1\": container with ID starting with 864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1 not found: ID does not exist" containerID="864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1" Feb 18 20:19:56 crc kubenswrapper[4932]: I0218 20:19:56.181070 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1"} err="failed to get container status \"864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1\": rpc error: code = NotFound desc = could not find container \"864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1\": container with ID starting with 864448a392c5d44b280c1229ff46cb1995bf791703da11de27ea2bbaf194b3c1 not found: ID does not exist" Feb 18 20:19:57 crc kubenswrapper[4932]: I0218 20:19:57.196962 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" path="/var/lib/kubelet/pods/98eb30d5-e437-4090-a44b-84245137fb3c/volumes" Feb 18 20:19:57 crc kubenswrapper[4932]: I0218 20:19:57.606578 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:19:57 crc kubenswrapper[4932]: I0218 
20:19:57.606647 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:19:57 crc kubenswrapper[4932]: I0218 20:19:57.606700 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:19:57 crc kubenswrapper[4932]: I0218 20:19:57.607487 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d641d1880a050cdf1021a445fa79e88f90ca1f340fe0f38bc6a038f7b103aec"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:19:57 crc kubenswrapper[4932]: I0218 20:19:57.607543 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://0d641d1880a050cdf1021a445fa79e88f90ca1f340fe0f38bc6a038f7b103aec" gracePeriod=600 Feb 18 20:19:58 crc kubenswrapper[4932]: I0218 20:19:58.102711 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="0d641d1880a050cdf1021a445fa79e88f90ca1f340fe0f38bc6a038f7b103aec" exitCode=0 Feb 18 20:19:58 crc kubenswrapper[4932]: I0218 20:19:58.102778 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"0d641d1880a050cdf1021a445fa79e88f90ca1f340fe0f38bc6a038f7b103aec"} Feb 18 20:19:58 crc 
kubenswrapper[4932]: I0218 20:19:58.103446 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"} Feb 18 20:19:58 crc kubenswrapper[4932]: I0218 20:19:58.103468 4932 scope.go:117] "RemoveContainer" containerID="87ee69b3c9ae0715a5bb5f8279b2b5f5810507ea21063192d412162c1fdb294f" Feb 18 20:19:59 crc kubenswrapper[4932]: I0218 20:19:59.242407 4932 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4fhd" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" probeResult="failure" output=< Feb 18 20:19:59 crc kubenswrapper[4932]: timeout: failed to connect service ":50051" within 1s Feb 18 20:19:59 crc kubenswrapper[4932]: > Feb 18 20:20:08 crc kubenswrapper[4932]: I0218 20:20:08.260739 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:20:08 crc kubenswrapper[4932]: I0218 20:20:08.346493 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x4fhd" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.535199 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4fhd"] Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.775547 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8jx"] Feb 18 20:20:09 crc kubenswrapper[4932]: E0218 20:20:09.776140 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="registry-server" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.776195 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" 
containerName="registry-server" Feb 18 20:20:09 crc kubenswrapper[4932]: E0218 20:20:09.776221 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="extract-utilities" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.776229 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="extract-utilities" Feb 18 20:20:09 crc kubenswrapper[4932]: E0218 20:20:09.776264 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="extract-content" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.776303 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="extract-content" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.776564 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="98eb30d5-e437-4090-a44b-84245137fb3c" containerName="registry-server" Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.778390 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.798292 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8jx"]
Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.845284 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-catalog-content\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.845634 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-utilities\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.845779 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7sds\" (UniqueName: \"kubernetes.io/projected/5411e325-db57-464b-b5cd-312b4dd719a6-kube-api-access-q7sds\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.947780 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-utilities\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.947889 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7sds\" (UniqueName: \"kubernetes.io/projected/5411e325-db57-464b-b5cd-312b4dd719a6-kube-api-access-q7sds\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.948075 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-catalog-content\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.948417 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-utilities\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.948468 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-catalog-content\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:09 crc kubenswrapper[4932]: I0218 20:20:09.973575 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7sds\" (UniqueName: \"kubernetes.io/projected/5411e325-db57-464b-b5cd-312b4dd719a6-kube-api-access-q7sds\") pod \"redhat-marketplace-5g8jx\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") " pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.111240 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.266464 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x4fhd" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" containerID="cri-o://ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a" gracePeriod=2
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.616729 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8jx"]
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.703560 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4fhd"
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.769048 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-catalog-content\") pod \"9c726726-9ae9-4956-9999-09c956029615\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") "
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.769094 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-utilities\") pod \"9c726726-9ae9-4956-9999-09c956029615\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") "
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.769214 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tl7q\" (UniqueName: \"kubernetes.io/projected/9c726726-9ae9-4956-9999-09c956029615-kube-api-access-9tl7q\") pod \"9c726726-9ae9-4956-9999-09c956029615\" (UID: \"9c726726-9ae9-4956-9999-09c956029615\") "
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.769892 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-utilities" (OuterVolumeSpecName: "utilities") pod "9c726726-9ae9-4956-9999-09c956029615" (UID: "9c726726-9ae9-4956-9999-09c956029615"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.774596 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c726726-9ae9-4956-9999-09c956029615-kube-api-access-9tl7q" (OuterVolumeSpecName: "kube-api-access-9tl7q") pod "9c726726-9ae9-4956-9999-09c956029615" (UID: "9c726726-9ae9-4956-9999-09c956029615"). InnerVolumeSpecName "kube-api-access-9tl7q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.871647 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tl7q\" (UniqueName: \"kubernetes.io/projected/9c726726-9ae9-4956-9999-09c956029615-kube-api-access-9tl7q\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.871684 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.880550 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c726726-9ae9-4956-9999-09c956029615" (UID: "9c726726-9ae9-4956-9999-09c956029615"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:20:10 crc kubenswrapper[4932]: I0218 20:20:10.973825 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c726726-9ae9-4956-9999-09c956029615-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.279682 4932 generic.go:334] "Generic (PLEG): container finished" podID="9c726726-9ae9-4956-9999-09c956029615" containerID="ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a" exitCode=0
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.279752 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerDied","Data":"ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a"}
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.279786 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4fhd"
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.279804 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4fhd" event={"ID":"9c726726-9ae9-4956-9999-09c956029615","Type":"ContainerDied","Data":"e093f1cca327bf041c93f56c487c61e59e1e403678b96164f3bbb1c6097b672a"}
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.279836 4932 scope.go:117] "RemoveContainer" containerID="ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a"
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.285090 4932 generic.go:334] "Generic (PLEG): container finished" podID="5411e325-db57-464b-b5cd-312b4dd719a6" containerID="c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe" exitCode=0
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.285142 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerDied","Data":"c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe"}
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.285206 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerStarted","Data":"1e58f58be7fdf29dc380a3f94ed9e5b8c8d93390baacd2a911ef7c5416afd603"}
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.322789 4932 scope.go:117] "RemoveContainer" containerID="c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f"
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.358754 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4fhd"]
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.368625 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x4fhd"]
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.372211 4932 scope.go:117] "RemoveContainer" containerID="d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc"
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.425853 4932 scope.go:117] "RemoveContainer" containerID="ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a"
Feb 18 20:20:11 crc kubenswrapper[4932]: E0218 20:20:11.426353 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a\": container with ID starting with ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a not found: ID does not exist" containerID="ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a"
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.426417 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a"} err="failed to get container status \"ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a\": rpc error: code = NotFound desc = could not find container \"ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a\": container with ID starting with ecc41c6de44b323b6a9eb516b14711e5431dc2b7a5271228b87b570b3298928a not found: ID does not exist"
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.426453 4932 scope.go:117] "RemoveContainer" containerID="c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f"
Feb 18 20:20:11 crc kubenswrapper[4932]: E0218 20:20:11.427127 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f\": container with ID starting with c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f not found: ID does not exist" containerID="c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f"
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.427169 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f"} err="failed to get container status \"c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f\": rpc error: code = NotFound desc = could not find container \"c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f\": container with ID starting with c34916e76b220bbc9b7adb905e2b92b701443ef539041d81b48bf956f1c2fb1f not found: ID does not exist"
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.427213 4932 scope.go:117] "RemoveContainer" containerID="d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc"
Feb 18 20:20:11 crc kubenswrapper[4932]: E0218 20:20:11.427673 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc\": container with ID starting with d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc not found: ID does not exist" containerID="d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc"
Feb 18 20:20:11 crc kubenswrapper[4932]: I0218 20:20:11.427700 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc"} err="failed to get container status \"d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc\": rpc error: code = NotFound desc = could not find container \"d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc\": container with ID starting with d84e8cc527cc17b9aced3793146dcdc81638d7d698d4258b2a58f919fbe804cc not found: ID does not exist"
Feb 18 20:20:12 crc kubenswrapper[4932]: I0218 20:20:12.301679 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerStarted","Data":"75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53"}
Feb 18 20:20:13 crc kubenswrapper[4932]: I0218 20:20:13.195075 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c726726-9ae9-4956-9999-09c956029615" path="/var/lib/kubelet/pods/9c726726-9ae9-4956-9999-09c956029615/volumes"
Feb 18 20:20:13 crc kubenswrapper[4932]: I0218 20:20:13.315851 4932 generic.go:334] "Generic (PLEG): container finished" podID="5411e325-db57-464b-b5cd-312b4dd719a6" containerID="75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53" exitCode=0
Feb 18 20:20:13 crc kubenswrapper[4932]: I0218 20:20:13.315899 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerDied","Data":"75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53"}
Feb 18 20:20:14 crc kubenswrapper[4932]: I0218 20:20:14.332523 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerStarted","Data":"f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b"}
Feb 18 20:20:14 crc kubenswrapper[4932]: I0218 20:20:14.369842 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5g8jx" podStartSLOduration=2.878987376 podStartE2EDuration="5.369817444s" podCreationTimestamp="2026-02-18 20:20:09 +0000 UTC" firstStartedPulling="2026-02-18 20:20:11.289446054 +0000 UTC m=+2774.871400939" lastFinishedPulling="2026-02-18 20:20:13.780276132 +0000 UTC m=+2777.362231007" observedRunningTime="2026-02-18 20:20:14.360922644 +0000 UTC m=+2777.942877529" watchObservedRunningTime="2026-02-18 20:20:14.369817444 +0000 UTC m=+2777.951772299"
Feb 18 20:20:20 crc kubenswrapper[4932]: I0218 20:20:20.111608 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:20 crc kubenswrapper[4932]: I0218 20:20:20.112155 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:20 crc kubenswrapper[4932]: I0218 20:20:20.203050 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:20 crc kubenswrapper[4932]: I0218 20:20:20.456751 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:20 crc kubenswrapper[4932]: I0218 20:20:20.511677 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8jx"]
Feb 18 20:20:22 crc kubenswrapper[4932]: I0218 20:20:22.423969 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5g8jx" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="registry-server" containerID="cri-o://f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b" gracePeriod=2
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:22.974568 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.070353 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-catalog-content\") pod \"5411e325-db57-464b-b5cd-312b4dd719a6\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") "
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.070467 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7sds\" (UniqueName: \"kubernetes.io/projected/5411e325-db57-464b-b5cd-312b4dd719a6-kube-api-access-q7sds\") pod \"5411e325-db57-464b-b5cd-312b4dd719a6\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") "
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.070539 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-utilities\") pod \"5411e325-db57-464b-b5cd-312b4dd719a6\" (UID: \"5411e325-db57-464b-b5cd-312b4dd719a6\") "
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.071814 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-utilities" (OuterVolumeSpecName: "utilities") pod "5411e325-db57-464b-b5cd-312b4dd719a6" (UID: "5411e325-db57-464b-b5cd-312b4dd719a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.078581 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5411e325-db57-464b-b5cd-312b4dd719a6-kube-api-access-q7sds" (OuterVolumeSpecName: "kube-api-access-q7sds") pod "5411e325-db57-464b-b5cd-312b4dd719a6" (UID: "5411e325-db57-464b-b5cd-312b4dd719a6"). InnerVolumeSpecName "kube-api-access-q7sds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.098618 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5411e325-db57-464b-b5cd-312b4dd719a6" (UID: "5411e325-db57-464b-b5cd-312b4dd719a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.173104 4932 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.173130 4932 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5411e325-db57-464b-b5cd-312b4dd719a6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.173140 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7sds\" (UniqueName: \"kubernetes.io/projected/5411e325-db57-464b-b5cd-312b4dd719a6-kube-api-access-q7sds\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.436834 4932 generic.go:334] "Generic (PLEG): container finished" podID="5411e325-db57-464b-b5cd-312b4dd719a6" containerID="f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b" exitCode=0
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.436896 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g8jx"
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.436917 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerDied","Data":"f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b"}
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.437733 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8jx" event={"ID":"5411e325-db57-464b-b5cd-312b4dd719a6","Type":"ContainerDied","Data":"1e58f58be7fdf29dc380a3f94ed9e5b8c8d93390baacd2a911ef7c5416afd603"}
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.437773 4932 scope.go:117] "RemoveContainer" containerID="f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b"
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.468696 4932 scope.go:117] "RemoveContainer" containerID="75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53"
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.489256 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8jx"]
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.493641 4932 scope.go:117] "RemoveContainer" containerID="c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe"
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.501372 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8jx"]
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.559997 4932 scope.go:117] "RemoveContainer" containerID="f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b"
Feb 18 20:20:23 crc kubenswrapper[4932]: E0218 20:20:23.560981 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b\": container with ID starting with f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b not found: ID does not exist" containerID="f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b"
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.561032 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b"} err="failed to get container status \"f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b\": rpc error: code = NotFound desc = could not find container \"f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b\": container with ID starting with f81e640be0999b24af3c9528df1cf52bb1a73d147a0383579175024c8d4eb88b not found: ID does not exist"
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.561066 4932 scope.go:117] "RemoveContainer" containerID="75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53"
Feb 18 20:20:23 crc kubenswrapper[4932]: E0218 20:20:23.561504 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53\": container with ID starting with 75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53 not found: ID does not exist" containerID="75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53"
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.561524 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53"} err="failed to get container status \"75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53\": rpc error: code = NotFound desc = could not find container \"75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53\": container with ID starting with 75c5bb9da4a73d33078398daa979f564a3c3296cc55b1e5c5b96e31ab80eea53 not found: ID does not exist"
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.561537 4932 scope.go:117] "RemoveContainer" containerID="c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe"
Feb 18 20:20:23 crc kubenswrapper[4932]: E0218 20:20:23.561782 4932 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe\": container with ID starting with c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe not found: ID does not exist" containerID="c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe"
Feb 18 20:20:23 crc kubenswrapper[4932]: I0218 20:20:23.561798 4932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe"} err="failed to get container status \"c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe\": rpc error: code = NotFound desc = could not find container \"c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe\": container with ID starting with c81393fa71a13e1dac8b1fd09abec1a7ca8390f38bac88e9dc9b10b20c9291fe not found: ID does not exist"
Feb 18 20:20:25 crc kubenswrapper[4932]: I0218 20:20:25.194635 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" path="/var/lib/kubelet/pods/5411e325-db57-464b-b5cd-312b4dd719a6/volumes"
Feb 18 20:20:44 crc kubenswrapper[4932]: I0218 20:20:44.833758 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 20:20:44 crc kubenswrapper[4932]: I0218 20:20:44.834676 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="prometheus" containerID="cri-o://361657e74a3f41f1c11b35878117fcf352b08b255d1c2d6041c3ed746c1fd2c2" gracePeriod=600
Feb 18 20:20:44 crc kubenswrapper[4932]: I0218 20:20:44.834796 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="thanos-sidecar" containerID="cri-o://87593181676f68ce6f705683e7d0d7ac8f773d82d9f3858c223d1a3115fbc1c5" gracePeriod=600
Feb 18 20:20:44 crc kubenswrapper[4932]: I0218 20:20:44.834796 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="config-reloader" containerID="cri-o://d0b5bb5f9b3d94768e061de73d45369ab8df4d6880aaa6f295ec1ea349cbcc2b" gracePeriod=600
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.708964 4932 generic.go:334] "Generic (PLEG): container finished" podID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerID="87593181676f68ce6f705683e7d0d7ac8f773d82d9f3858c223d1a3115fbc1c5" exitCode=0
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.709249 4932 generic.go:334] "Generic (PLEG): container finished" podID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerID="d0b5bb5f9b3d94768e061de73d45369ab8df4d6880aaa6f295ec1ea349cbcc2b" exitCode=0
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.709268 4932 generic.go:334] "Generic (PLEG): container finished" podID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerID="361657e74a3f41f1c11b35878117fcf352b08b255d1c2d6041c3ed746c1fd2c2" exitCode=0
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.709051 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerDied","Data":"87593181676f68ce6f705683e7d0d7ac8f773d82d9f3858c223d1a3115fbc1c5"}
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.709305 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerDied","Data":"d0b5bb5f9b3d94768e061de73d45369ab8df4d6880aaa6f295ec1ea349cbcc2b"}
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.709318 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerDied","Data":"361657e74a3f41f1c11b35878117fcf352b08b255d1c2d6041c3ed746c1fd2c2"}
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.882611 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.932787 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-2\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.932873 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.932895 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-secret-combined-ca-bundle\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.932924 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-tls-assets\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.932993 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-config\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933036 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1783f11-a79f-49d9-a637-224863cdb0ad-config-out\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933056 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-1\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933100 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933567 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933831 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933865 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-0\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933923 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-thanos-prometheus-http-client-file\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.933984 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.934081 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnmwk\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-kube-api-access-fnmwk\") pod \"f1783f11-a79f-49d9-a637-224863cdb0ad\" (UID: \"f1783f11-a79f-49d9-a637-224863cdb0ad\") "
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.934703 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.935621 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.935940 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.991418 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.991534 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.991658 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.991725 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1783f11-a79f-49d9-a637-224863cdb0ad-config-out" (OuterVolumeSpecName: "config-out") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "config-out".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.991750 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-kube-api-access-fnmwk" (OuterVolumeSpecName: "kube-api-access-fnmwk") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "kube-api-access-fnmwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.994343 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-config" (OuterVolumeSpecName: "config") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:20:45 crc kubenswrapper[4932]: I0218 20:20:45.995337 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.007311 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.040563 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079480 4932 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079524 4932 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1783f11-a79f-49d9-a637-224863cdb0ad-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079534 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079546 4932 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079575 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") on node \"crc\" " Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079586 4932 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f1783f11-a79f-49d9-a637-224863cdb0ad-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079595 4932 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079606 4932 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079616 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnmwk\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-kube-api-access-fnmwk\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079628 4932 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.079640 4932 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1783f11-a79f-49d9-a637-224863cdb0ad-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.194440 4932 
csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.194592 4932 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69") on node "crc" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.205318 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config" (OuterVolumeSpecName: "web-config") pod "f1783f11-a79f-49d9-a637-224863cdb0ad" (UID: "f1783f11-a79f-49d9-a637-224863cdb0ad"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.287853 4932 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1783f11-a79f-49d9-a637-224863cdb0ad-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.287905 4932 reconciler_common.go:293] "Volume detached for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") on node \"crc\" DevicePath \"\"" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.722509 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f1783f11-a79f-49d9-a637-224863cdb0ad","Type":"ContainerDied","Data":"517321ee2b5c108f37907af390aff2f58338e81a6d4f29d0b1fb1230f8840a63"} Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.722778 4932 scope.go:117] "RemoveContainer" containerID="87593181676f68ce6f705683e7d0d7ac8f773d82d9f3858c223d1a3115fbc1c5" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.722645 4932 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.747492 4932 scope.go:117] "RemoveContainer" containerID="d0b5bb5f9b3d94768e061de73d45369ab8df4d6880aaa6f295ec1ea349cbcc2b" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.789047 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.791671 4932 scope.go:117] "RemoveContainer" containerID="361657e74a3f41f1c11b35878117fcf352b08b255d1c2d6041c3ed746c1fd2c2" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.797776 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.830999 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831425 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831441 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831454 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="registry-server" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831460 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="registry-server" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831472 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="init-config-reloader" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 
20:20:46.831480 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="init-config-reloader" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831489 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="extract-utilities" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831494 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="extract-utilities" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831506 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="config-reloader" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831511 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="config-reloader" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831524 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="prometheus" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831529 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="prometheus" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831546 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="extract-content" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831552 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="extract-content" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831566 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="thanos-sidecar" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831571 
4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="thanos-sidecar" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831581 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="extract-utilities" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831588 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="extract-utilities" Feb 18 20:20:46 crc kubenswrapper[4932]: E0218 20:20:46.831611 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="extract-content" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831616 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="extract-content" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831777 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="thanos-sidecar" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831790 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c726726-9ae9-4956-9999-09c956029615" containerName="registry-server" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831804 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="config-reloader" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831817 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" containerName="prometheus" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.831824 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="5411e325-db57-464b-b5cd-312b4dd719a6" containerName="registry-server" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.834783 
4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.835523 4932 scope.go:117] "RemoveContainer" containerID="81f9d76b429826048a1f76e9841d9bd5c8224e1c54ca1834ee1d11eed8e3afa6" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.839107 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.839118 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.839390 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-5jcnf" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.839459 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.839406 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.839645 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.843084 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.846661 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:20:46 crc kubenswrapper[4932]: I0218 20:20:46.848629 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 
20:20:47.004977 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/406a2738-127b-4d6d-8de4-3f5d88896b4c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005067 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005182 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005293 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005360 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005399 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005462 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/406a2738-127b-4d6d-8de4-3f5d88896b4c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005565 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-config\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005602 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: 
I0218 20:20:47.005665 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005735 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005769 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpwcp\" (UniqueName: \"kubernetes.io/projected/406a2738-127b-4d6d-8de4-3f5d88896b4c-kube-api-access-dpwcp\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.005862 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107220 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-config\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107290 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107344 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107388 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107422 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpwcp\" (UniqueName: \"kubernetes.io/projected/406a2738-127b-4d6d-8de4-3f5d88896b4c-kube-api-access-dpwcp\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107458 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-1\") pod 
\"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107486 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/406a2738-127b-4d6d-8de4-3f5d88896b4c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107520 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107625 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107654 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107689 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.107728 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/406a2738-127b-4d6d-8de4-3f5d88896b4c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.108746 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.109414 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc 
kubenswrapper[4932]: I0218 20:20:47.111350 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/406a2738-127b-4d6d-8de4-3f5d88896b4c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.114503 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.114914 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-config\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.118839 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/406a2738-127b-4d6d-8de4-3f5d88896b4c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.119272 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 
20:20:47.119398 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.119489 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.119573 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/406a2738-127b-4d6d-8de4-3f5d88896b4c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.119725 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/406a2738-127b-4d6d-8de4-3f5d88896b4c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.121276 4932 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.121307 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e039419306e79ade7652e80c67474011a5658585fd3b39d0b236ffa94ab5d0db/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.131712 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpwcp\" (UniqueName: \"kubernetes.io/projected/406a2738-127b-4d6d-8de4-3f5d88896b4c-kube-api-access-dpwcp\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.176444 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b941694d-9ec5-4273-a8a3-59a9821c5e69\") pod \"prometheus-metric-storage-0\" (UID: \"406a2738-127b-4d6d-8de4-3f5d88896b4c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.191128 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1783f11-a79f-49d9-a637-224863cdb0ad" path="/var/lib/kubelet/pods/f1783f11-a79f-49d9-a637-224863cdb0ad/volumes" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.218403 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.693802 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:20:47 crc kubenswrapper[4932]: W0218 20:20:47.698817 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod406a2738_127b_4d6d_8de4_3f5d88896b4c.slice/crio-2339bcc9c1d5fd8d404e3bf814a2e8e4d03d1d62d9d3e7919378f3a98010c483 WatchSource:0}: Error finding container 2339bcc9c1d5fd8d404e3bf814a2e8e4d03d1d62d9d3e7919378f3a98010c483: Status 404 returned error can't find the container with id 2339bcc9c1d5fd8d404e3bf814a2e8e4d03d1d62d9d3e7919378f3a98010c483 Feb 18 20:20:47 crc kubenswrapper[4932]: I0218 20:20:47.741285 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"406a2738-127b-4d6d-8de4-3f5d88896b4c","Type":"ContainerStarted","Data":"2339bcc9c1d5fd8d404e3bf814a2e8e4d03d1d62d9d3e7919378f3a98010c483"} Feb 18 20:20:51 crc kubenswrapper[4932]: I0218 20:20:51.784754 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"406a2738-127b-4d6d-8de4-3f5d88896b4c","Type":"ContainerStarted","Data":"0efee390723588d24468eabdbf10c5a2ebb771b098b4b1e7b752a6aa1567498d"} Feb 18 20:21:00 crc kubenswrapper[4932]: I0218 20:21:00.889541 4932 generic.go:334] "Generic (PLEG): container finished" podID="406a2738-127b-4d6d-8de4-3f5d88896b4c" containerID="0efee390723588d24468eabdbf10c5a2ebb771b098b4b1e7b752a6aa1567498d" exitCode=0 Feb 18 20:21:00 crc kubenswrapper[4932]: I0218 20:21:00.889681 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"406a2738-127b-4d6d-8de4-3f5d88896b4c","Type":"ContainerDied","Data":"0efee390723588d24468eabdbf10c5a2ebb771b098b4b1e7b752a6aa1567498d"} Feb 18 20:21:01 crc 
kubenswrapper[4932]: I0218 20:21:01.902448 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"406a2738-127b-4d6d-8de4-3f5d88896b4c","Type":"ContainerStarted","Data":"d50ab465ae80ff5724a700d003136a7902824fcb39d6203d36f12ac40ffa0cad"} Feb 18 20:21:05 crc kubenswrapper[4932]: I0218 20:21:05.963356 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"406a2738-127b-4d6d-8de4-3f5d88896b4c","Type":"ContainerStarted","Data":"a79ef73103ea68b3eabdc169f566d82ef76b1733f0787fad30b43241c052da85"} Feb 18 20:21:05 crc kubenswrapper[4932]: I0218 20:21:05.963665 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"406a2738-127b-4d6d-8de4-3f5d88896b4c","Type":"ContainerStarted","Data":"0139950794a616f732ea759c1c7878a78bc2714d801bde2d8066230e81c5ffde"} Feb 18 20:21:06 crc kubenswrapper[4932]: I0218 20:21:06.011159 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.011137692 podStartE2EDuration="20.011137692s" podCreationTimestamp="2026-02-18 20:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:21:06.003320249 +0000 UTC m=+2829.585275134" watchObservedRunningTime="2026-02-18 20:21:06.011137692 +0000 UTC m=+2829.593092557" Feb 18 20:21:07 crc kubenswrapper[4932]: I0218 20:21:07.219553 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 20:21:17 crc kubenswrapper[4932]: I0218 20:21:17.218690 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 20:21:17 crc kubenswrapper[4932]: I0218 20:21:17.227913 4932 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/prometheus-metric-storage-0" Feb 18 20:21:18 crc kubenswrapper[4932]: I0218 20:21:18.139581 4932 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.339272 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.341539 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.346014 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.346309 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bccj2" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.348799 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.349450 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.361406 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.467925 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.467977 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fhls\" 
(UniqueName: \"kubernetes.io/projected/2947758a-fd4b-4a4a-956a-41fefa7296a0-kube-api-access-7fhls\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468039 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468111 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468162 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468296 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468323 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-config-data\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468378 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.468443 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570566 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570678 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570707 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-config-data\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570755 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570823 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570867 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570894 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fhls\" (UniqueName: \"kubernetes.io/projected/2947758a-fd4b-4a4a-956a-41fefa7296a0-kube-api-access-7fhls\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570946 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.570979 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.571037 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.571319 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.571579 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.572376 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-config-data\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.573101 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.577797 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.579359 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.579421 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.591117 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fhls\" (UniqueName: \"kubernetes.io/projected/2947758a-fd4b-4a4a-956a-41fefa7296a0-kube-api-access-7fhls\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " 
pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.606982 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " pod="openstack/tempest-tests-tempest" Feb 18 20:21:30 crc kubenswrapper[4932]: I0218 20:21:30.668509 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:21:31 crc kubenswrapper[4932]: I0218 20:21:31.131106 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:21:31 crc kubenswrapper[4932]: I0218 20:21:31.298696 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2947758a-fd4b-4a4a-956a-41fefa7296a0","Type":"ContainerStarted","Data":"8c283e884b8f80bf01f3a12151451c0769e806057cd6f8d4c57d644f30012eb1"} Feb 18 20:21:41 crc kubenswrapper[4932]: I0218 20:21:41.396801 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2947758a-fd4b-4a4a-956a-41fefa7296a0","Type":"ContainerStarted","Data":"b8cc57bfeb38d618854d30ad2a0303534b4a0674c797bd1d7dcd4db1e8159186"} Feb 18 20:21:41 crc kubenswrapper[4932]: I0218 20:21:41.431471 4932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.392070405 podStartE2EDuration="12.431451572s" podCreationTimestamp="2026-02-18 20:21:29 +0000 UTC" firstStartedPulling="2026-02-18 20:21:31.133567107 +0000 UTC m=+2854.715521952" lastFinishedPulling="2026-02-18 20:21:40.172948274 +0000 UTC m=+2863.754903119" observedRunningTime="2026-02-18 20:21:41.424036199 +0000 UTC m=+2865.005991104" watchObservedRunningTime="2026-02-18 20:21:41.431451572 +0000 UTC m=+2865.013406427" Feb 18 20:22:27 crc kubenswrapper[4932]: I0218 
20:22:27.606134 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:22:27 crc kubenswrapper[4932]: I0218 20:22:27.606591 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:22:57 crc kubenswrapper[4932]: I0218 20:22:57.605910 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:22:57 crc kubenswrapper[4932]: I0218 20:22:57.606919 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:23:27 crc kubenswrapper[4932]: I0218 20:23:27.607349 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:23:27 crc kubenswrapper[4932]: I0218 20:23:27.608776 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" 
podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:23:27 crc kubenswrapper[4932]: I0218 20:23:27.608862 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:23:27 crc kubenswrapper[4932]: I0218 20:23:27.610407 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:23:27 crc kubenswrapper[4932]: I0218 20:23:27.610559 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" gracePeriod=600 Feb 18 20:23:27 crc kubenswrapper[4932]: E0218 20:23:27.746241 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:23:28 crc kubenswrapper[4932]: I0218 20:23:28.704880 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" exitCode=0 Feb 18 
20:23:28 crc kubenswrapper[4932]: I0218 20:23:28.704991 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72"} Feb 18 20:23:28 crc kubenswrapper[4932]: I0218 20:23:28.705214 4932 scope.go:117] "RemoveContainer" containerID="0d641d1880a050cdf1021a445fa79e88f90ca1f340fe0f38bc6a038f7b103aec" Feb 18 20:23:28 crc kubenswrapper[4932]: I0218 20:23:28.705839 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:23:28 crc kubenswrapper[4932]: E0218 20:23:28.706384 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:23:40 crc kubenswrapper[4932]: I0218 20:23:40.179970 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:23:40 crc kubenswrapper[4932]: E0218 20:23:40.182008 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:23:53 crc kubenswrapper[4932]: I0218 20:23:53.180771 4932 scope.go:117] "RemoveContainer" 
containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:23:53 crc kubenswrapper[4932]: E0218 20:23:53.181887 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:24:05 crc kubenswrapper[4932]: I0218 20:24:05.233084 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:24:05 crc kubenswrapper[4932]: E0218 20:24:05.233857 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:24:16 crc kubenswrapper[4932]: I0218 20:24:16.180023 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:24:16 crc kubenswrapper[4932]: E0218 20:24:16.181269 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:24:30 crc kubenswrapper[4932]: I0218 20:24:30.179936 4932 scope.go:117] 
"RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:24:30 crc kubenswrapper[4932]: E0218 20:24:30.180661 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:24:43 crc kubenswrapper[4932]: I0218 20:24:43.179292 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:24:43 crc kubenswrapper[4932]: E0218 20:24:43.180105 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:24:58 crc kubenswrapper[4932]: I0218 20:24:58.179226 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:24:58 crc kubenswrapper[4932]: E0218 20:24:58.180138 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:25:10 crc kubenswrapper[4932]: I0218 20:25:10.179332 
4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:25:10 crc kubenswrapper[4932]: E0218 20:25:10.180450 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:25:25 crc kubenswrapper[4932]: I0218 20:25:25.180218 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:25:25 crc kubenswrapper[4932]: E0218 20:25:25.181294 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:25:38 crc kubenswrapper[4932]: I0218 20:25:38.178972 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:25:38 crc kubenswrapper[4932]: E0218 20:25:38.179676 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:25:51 crc kubenswrapper[4932]: I0218 
20:25:51.179753 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:25:51 crc kubenswrapper[4932]: E0218 20:25:51.180750 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:26:03 crc kubenswrapper[4932]: I0218 20:26:03.180250 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:26:03 crc kubenswrapper[4932]: E0218 20:26:03.181303 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:26:15 crc kubenswrapper[4932]: I0218 20:26:15.179617 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:26:15 crc kubenswrapper[4932]: E0218 20:26:15.180619 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:26:29 crc 
kubenswrapper[4932]: I0218 20:26:29.180064 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:26:29 crc kubenswrapper[4932]: E0218 20:26:29.181222 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:26:43 crc kubenswrapper[4932]: I0218 20:26:43.179722 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:26:43 crc kubenswrapper[4932]: E0218 20:26:43.180843 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:26:54 crc kubenswrapper[4932]: I0218 20:26:54.179940 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:26:54 crc kubenswrapper[4932]: E0218 20:26:54.181963 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 
18 20:27:06 crc kubenswrapper[4932]: I0218 20:27:06.179981 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:27:06 crc kubenswrapper[4932]: E0218 20:27:06.181110 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:27:18 crc kubenswrapper[4932]: I0218 20:27:18.179953 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:27:18 crc kubenswrapper[4932]: E0218 20:27:18.180771 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:27:29 crc kubenswrapper[4932]: I0218 20:27:29.179963 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:27:29 crc kubenswrapper[4932]: E0218 20:27:29.180645 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" 
podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:27:44 crc kubenswrapper[4932]: I0218 20:27:44.179718 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:27:44 crc kubenswrapper[4932]: E0218 20:27:44.180941 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.006890 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jhb45"] Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.010998 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jhb45" Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.034671 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhb45"] Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.062300 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a69dedd-7666-4739-af80-59d37eedf9b1-catalog-content\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45" Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.062499 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsc85\" (UniqueName: \"kubernetes.io/projected/1a69dedd-7666-4739-af80-59d37eedf9b1-kube-api-access-zsc85\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45" Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.062739 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a69dedd-7666-4739-af80-59d37eedf9b1-utilities\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45" Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.164397 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a69dedd-7666-4739-af80-59d37eedf9b1-utilities\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45" Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.164519 4932 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a69dedd-7666-4739-af80-59d37eedf9b1-catalog-content\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45" Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.164572 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsc85\" (UniqueName: \"kubernetes.io/projected/1a69dedd-7666-4739-af80-59d37eedf9b1-kube-api-access-zsc85\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45" Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.164899 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a69dedd-7666-4739-af80-59d37eedf9b1-utilities\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45" Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.164995 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a69dedd-7666-4739-af80-59d37eedf9b1-catalog-content\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45" Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.208660 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsc85\" (UniqueName: \"kubernetes.io/projected/1a69dedd-7666-4739-af80-59d37eedf9b1-kube-api-access-zsc85\") pod \"certified-operators-jhb45\" (UID: \"1a69dedd-7666-4739-af80-59d37eedf9b1\") " pod="openshift-marketplace/certified-operators-jhb45" Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.352798 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jhb45" Feb 18 20:27:55 crc kubenswrapper[4932]: I0218 20:27:55.764793 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhb45"] Feb 18 20:27:56 crc kubenswrapper[4932]: I0218 20:27:56.179607 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:27:56 crc kubenswrapper[4932]: E0218 20:27:56.180011 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:27:56 crc kubenswrapper[4932]: I0218 20:27:56.554719 4932 generic.go:334] "Generic (PLEG): container finished" podID="1a69dedd-7666-4739-af80-59d37eedf9b1" containerID="d70b17d5eed673b4dc82174e8289c879ab43a9e04b99bb7ee050e01a1fe688b6" exitCode=0 Feb 18 20:27:56 crc kubenswrapper[4932]: I0218 20:27:56.554814 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhb45" event={"ID":"1a69dedd-7666-4739-af80-59d37eedf9b1","Type":"ContainerDied","Data":"d70b17d5eed673b4dc82174e8289c879ab43a9e04b99bb7ee050e01a1fe688b6"} Feb 18 20:27:56 crc kubenswrapper[4932]: I0218 20:27:56.555045 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhb45" event={"ID":"1a69dedd-7666-4739-af80-59d37eedf9b1","Type":"ContainerStarted","Data":"39755ade2378128d460b755d127aefa3640d1c9e71491bc80b1a7158a1c5985c"} Feb 18 20:27:56 crc kubenswrapper[4932]: I0218 20:27:56.564570 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 
20:27:57 crc kubenswrapper[4932]: I0218 20:27:57.566705 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhb45" event={"ID":"1a69dedd-7666-4739-af80-59d37eedf9b1","Type":"ContainerStarted","Data":"4fdcd4c8b9327eed859283165a216726bc40d559357de1301d6dd45413fabca8"} Feb 18 20:27:59 crc kubenswrapper[4932]: I0218 20:27:59.588973 4932 generic.go:334] "Generic (PLEG): container finished" podID="1a69dedd-7666-4739-af80-59d37eedf9b1" containerID="4fdcd4c8b9327eed859283165a216726bc40d559357de1301d6dd45413fabca8" exitCode=0 Feb 18 20:27:59 crc kubenswrapper[4932]: I0218 20:27:59.589056 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhb45" event={"ID":"1a69dedd-7666-4739-af80-59d37eedf9b1","Type":"ContainerDied","Data":"4fdcd4c8b9327eed859283165a216726bc40d559357de1301d6dd45413fabca8"} Feb 18 20:27:59 crc kubenswrapper[4932]: E0218 20:27:59.986784 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:27:59 crc kubenswrapper[4932]: E0218 20:27:59.986950 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:27:59 crc kubenswrapper[4932]: E0218 20:27:59.988133 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:28:11 crc kubenswrapper[4932]: I0218 20:28:11.180096 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:28:11 crc kubenswrapper[4932]: E0218 20:28:11.181263 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:28:12 crc kubenswrapper[4932]: E0218 20:28:12.079294 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:28:12 crc kubenswrapper[4932]: E0218 20:28:12.079732 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:28:12 crc kubenswrapper[4932]: E0218 20:28:12.081261 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:28:24 crc kubenswrapper[4932]: E0218 20:28:24.186220 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:28:25 crc kubenswrapper[4932]: I0218 20:28:25.179163 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:28:25 crc kubenswrapper[4932]: E0218 20:28:25.179865 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" 
podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:28:38 crc kubenswrapper[4932]: I0218 20:28:38.195161 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:28:38 crc kubenswrapper[4932]: E0218 20:28:38.560851 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:28:38 crc kubenswrapper[4932]: E0218 20:28:38.561246 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:28:38 crc kubenswrapper[4932]: E0218 20:28:38.562718 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:28:39 crc 
kubenswrapper[4932]: I0218 20:28:39.271539 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"78c9f9fc09b3668d1f7c86135c8c1bd3ce72f15c084caed76c5a646c505ebcef"} Feb 18 20:28:52 crc kubenswrapper[4932]: E0218 20:28:52.183322 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:29:07 crc kubenswrapper[4932]: E0218 20:29:07.190513 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:29:20 crc kubenswrapper[4932]: E0218 20:29:20.638627 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:29:20 crc kubenswrapper[4932]: E0218 20:29:20.639262 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:29:20 crc kubenswrapper[4932]: E0218 20:29:20.640502 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:29:33 crc kubenswrapper[4932]: E0218 20:29:33.185204 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:29:46 crc kubenswrapper[4932]: E0218 20:29:46.183373 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:29:58 crc kubenswrapper[4932]: E0218 20:29:58.182725 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.164206 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"] Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.166065 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.169864 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.169927 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.182041 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"] Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.336530 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxx2d\" (UniqueName: \"kubernetes.io/projected/b626f706-b9f8-4e4b-9230-4af819e3faff-kube-api-access-bxx2d\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.337268 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b626f706-b9f8-4e4b-9230-4af819e3faff-config-volume\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.337458 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b626f706-b9f8-4e4b-9230-4af819e3faff-secret-volume\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.440213 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxx2d\" (UniqueName: \"kubernetes.io/projected/b626f706-b9f8-4e4b-9230-4af819e3faff-kube-api-access-bxx2d\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.440564 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b626f706-b9f8-4e4b-9230-4af819e3faff-config-volume\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.442083 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b626f706-b9f8-4e4b-9230-4af819e3faff-config-volume\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 
20:30:00.442270 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b626f706-b9f8-4e4b-9230-4af819e3faff-secret-volume\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.451788 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b626f706-b9f8-4e4b-9230-4af819e3faff-secret-volume\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.467830 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxx2d\" (UniqueName: \"kubernetes.io/projected/b626f706-b9f8-4e4b-9230-4af819e3faff-kube-api-access-bxx2d\") pod \"collect-profiles-29524110-2pm5t\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:00 crc kubenswrapper[4932]: I0218 20:30:00.492589 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:01 crc kubenswrapper[4932]: I0218 20:30:01.040199 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t"] Feb 18 20:30:01 crc kubenswrapper[4932]: W0218 20:30:01.046601 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb626f706_b9f8_4e4b_9230_4af819e3faff.slice/crio-2c924ddfba63066bdf3da8d9735690d1e77dde0a42ed19ccd0af37bec44080dd WatchSource:0}: Error finding container 2c924ddfba63066bdf3da8d9735690d1e77dde0a42ed19ccd0af37bec44080dd: Status 404 returned error can't find the container with id 2c924ddfba63066bdf3da8d9735690d1e77dde0a42ed19ccd0af37bec44080dd Feb 18 20:30:01 crc kubenswrapper[4932]: I0218 20:30:01.298253 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" event={"ID":"b626f706-b9f8-4e4b-9230-4af819e3faff","Type":"ContainerStarted","Data":"2c924ddfba63066bdf3da8d9735690d1e77dde0a42ed19ccd0af37bec44080dd"} Feb 18 20:30:02 crc kubenswrapper[4932]: I0218 20:30:02.311463 4932 generic.go:334] "Generic (PLEG): container finished" podID="b626f706-b9f8-4e4b-9230-4af819e3faff" containerID="a28ea4ba70aaf87a1e424c61365edff478f3b92e18d9a5358cbec200c9470566" exitCode=0 Feb 18 20:30:02 crc kubenswrapper[4932]: I0218 20:30:02.311587 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" event={"ID":"b626f706-b9f8-4e4b-9230-4af819e3faff","Type":"ContainerDied","Data":"a28ea4ba70aaf87a1e424c61365edff478f3b92e18d9a5358cbec200c9470566"} Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.798792 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.966952 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b626f706-b9f8-4e4b-9230-4af819e3faff-secret-volume\") pod \"b626f706-b9f8-4e4b-9230-4af819e3faff\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.967073 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b626f706-b9f8-4e4b-9230-4af819e3faff-config-volume\") pod \"b626f706-b9f8-4e4b-9230-4af819e3faff\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.967152 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxx2d\" (UniqueName: \"kubernetes.io/projected/b626f706-b9f8-4e4b-9230-4af819e3faff-kube-api-access-bxx2d\") pod \"b626f706-b9f8-4e4b-9230-4af819e3faff\" (UID: \"b626f706-b9f8-4e4b-9230-4af819e3faff\") " Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.968272 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b626f706-b9f8-4e4b-9230-4af819e3faff-config-volume" (OuterVolumeSpecName: "config-volume") pod "b626f706-b9f8-4e4b-9230-4af819e3faff" (UID: "b626f706-b9f8-4e4b-9230-4af819e3faff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.983680 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b626f706-b9f8-4e4b-9230-4af819e3faff-kube-api-access-bxx2d" (OuterVolumeSpecName: "kube-api-access-bxx2d") pod "b626f706-b9f8-4e4b-9230-4af819e3faff" (UID: "b626f706-b9f8-4e4b-9230-4af819e3faff"). 
InnerVolumeSpecName "kube-api-access-bxx2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:30:03 crc kubenswrapper[4932]: I0218 20:30:03.983923 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b626f706-b9f8-4e4b-9230-4af819e3faff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b626f706-b9f8-4e4b-9230-4af819e3faff" (UID: "b626f706-b9f8-4e4b-9230-4af819e3faff"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.069874 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxx2d\" (UniqueName: \"kubernetes.io/projected/b626f706-b9f8-4e4b-9230-4af819e3faff-kube-api-access-bxx2d\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.069910 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b626f706-b9f8-4e4b-9230-4af819e3faff-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.069922 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b626f706-b9f8-4e4b-9230-4af819e3faff-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.338285 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" event={"ID":"b626f706-b9f8-4e4b-9230-4af819e3faff","Type":"ContainerDied","Data":"2c924ddfba63066bdf3da8d9735690d1e77dde0a42ed19ccd0af37bec44080dd"} Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.338345 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c924ddfba63066bdf3da8d9735690d1e77dde0a42ed19ccd0af37bec44080dd" Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.338348 4932 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-2pm5t" Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.899525 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz"] Feb 18 20:30:04 crc kubenswrapper[4932]: I0218 20:30:04.900566 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-tcbfz"] Feb 18 20:30:05 crc kubenswrapper[4932]: I0218 20:30:05.192136 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84719922-9618-4293-8f4a-fb525f37eca6" path="/var/lib/kubelet/pods/84719922-9618-4293-8f4a-fb525f37eca6/volumes" Feb 18 20:30:06 crc kubenswrapper[4932]: I0218 20:30:06.540353 4932 scope.go:117] "RemoveContainer" containerID="80752bb80b5cb6dad23a49c747590ff84b2c23ef678e45c05c4cf091b2c9b0a9" Feb 18 20:30:12 crc kubenswrapper[4932]: E0218 20:30:12.183187 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:30:25 crc kubenswrapper[4932]: E0218 20:30:25.200584 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:30:38 crc kubenswrapper[4932]: E0218 20:30:38.182158 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.029793 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pkmxd"] Feb 18 20:30:47 crc kubenswrapper[4932]: E0218 20:30:47.031514 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b626f706-b9f8-4e4b-9230-4af819e3faff" containerName="collect-profiles" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.031538 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="b626f706-b9f8-4e4b-9230-4af819e3faff" containerName="collect-profiles" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.031866 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="b626f706-b9f8-4e4b-9230-4af819e3faff" containerName="collect-profiles" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.034359 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.046698 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkmxd"] Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.157848 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c30675a-a3c0-497c-804a-42c3640846eb-catalog-content\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.158336 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c30675a-a3c0-497c-804a-42c3640846eb-utilities\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.158424 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r77nm\" (UniqueName: \"kubernetes.io/projected/9c30675a-a3c0-497c-804a-42c3640846eb-kube-api-access-r77nm\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.261227 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c30675a-a3c0-497c-804a-42c3640846eb-utilities\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.261372 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-r77nm\" (UniqueName: \"kubernetes.io/projected/9c30675a-a3c0-497c-804a-42c3640846eb-kube-api-access-r77nm\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.261604 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c30675a-a3c0-497c-804a-42c3640846eb-catalog-content\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.261981 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c30675a-a3c0-497c-804a-42c3640846eb-utilities\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.262429 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c30675a-a3c0-497c-804a-42c3640846eb-catalog-content\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.305318 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r77nm\" (UniqueName: \"kubernetes.io/projected/9c30675a-a3c0-497c-804a-42c3640846eb-kube-api-access-r77nm\") pod \"redhat-operators-pkmxd\" (UID: \"9c30675a-a3c0-497c-804a-42c3640846eb\") " pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.387261 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkmxd" Feb 18 20:30:47 crc kubenswrapper[4932]: I0218 20:30:47.919283 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkmxd"] Feb 18 20:30:48 crc kubenswrapper[4932]: I0218 20:30:48.885546 4932 generic.go:334] "Generic (PLEG): container finished" podID="9c30675a-a3c0-497c-804a-42c3640846eb" containerID="16369a6bddfc3696f7afc6cc93dd9e7c1afad8ee1bd2329bd714895abb808f8c" exitCode=0 Feb 18 20:30:48 crc kubenswrapper[4932]: I0218 20:30:48.885604 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkmxd" event={"ID":"9c30675a-a3c0-497c-804a-42c3640846eb","Type":"ContainerDied","Data":"16369a6bddfc3696f7afc6cc93dd9e7c1afad8ee1bd2329bd714895abb808f8c"} Feb 18 20:30:48 crc kubenswrapper[4932]: I0218 20:30:48.886377 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkmxd" event={"ID":"9c30675a-a3c0-497c-804a-42c3640846eb","Type":"ContainerStarted","Data":"7d415574aee0d64a48ca3732b58be3fb90a60251f41cdec722659c50ef2bf823"} Feb 18 20:30:49 crc kubenswrapper[4932]: E0218 20:30:49.909552 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:30:49 crc kubenswrapper[4932]: E0218 20:30:49.909703 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:30:49 crc kubenswrapper[4932]: E0218 20:30:49.911048 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:30:50 crc kubenswrapper[4932]: E0218 20:30:50.721475 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:30:50 crc kubenswrapper[4932]: E0218 20:30:50.722536 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:30:50 crc kubenswrapper[4932]: E0218 20:30:50.724604 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:30:50 crc kubenswrapper[4932]: E0218 20:30:50.911157 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:30:51 crc kubenswrapper[4932]: I0218 20:30:51.818739 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s2grr"] Feb 18 20:30:51 crc kubenswrapper[4932]: I0218 20:30:51.822823 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:51 crc kubenswrapper[4932]: I0218 20:30:51.853258 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2grr"] Feb 18 20:30:51 crc kubenswrapper[4932]: I0218 20:30:51.980452 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l46c\" (UniqueName: \"kubernetes.io/projected/088aaa53-25ca-48c3-a904-2af0f07e8c2b-kube-api-access-9l46c\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:51 crc kubenswrapper[4932]: I0218 20:30:51.980682 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088aaa53-25ca-48c3-a904-2af0f07e8c2b-utilities\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:51 crc kubenswrapper[4932]: I0218 20:30:51.980713 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088aaa53-25ca-48c3-a904-2af0f07e8c2b-catalog-content\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.082288 4932 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9l46c\" (UniqueName: \"kubernetes.io/projected/088aaa53-25ca-48c3-a904-2af0f07e8c2b-kube-api-access-9l46c\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.082427 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088aaa53-25ca-48c3-a904-2af0f07e8c2b-utilities\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.082451 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088aaa53-25ca-48c3-a904-2af0f07e8c2b-catalog-content\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.083230 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088aaa53-25ca-48c3-a904-2af0f07e8c2b-catalog-content\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.083330 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088aaa53-25ca-48c3-a904-2af0f07e8c2b-utilities\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.104291 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l46c\" (UniqueName: 
\"kubernetes.io/projected/088aaa53-25ca-48c3-a904-2af0f07e8c2b-kube-api-access-9l46c\") pod \"redhat-marketplace-s2grr\" (UID: \"088aaa53-25ca-48c3-a904-2af0f07e8c2b\") " pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.176628 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2grr" Feb 18 20:30:52 crc kubenswrapper[4932]: W0218 20:30:52.644231 4932 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod088aaa53_25ca_48c3_a904_2af0f07e8c2b.slice/crio-66624ef687c548b92a5c3ac02a2432d31794e6c72c153d725c6352fff647fd81 WatchSource:0}: Error finding container 66624ef687c548b92a5c3ac02a2432d31794e6c72c153d725c6352fff647fd81: Status 404 returned error can't find the container with id 66624ef687c548b92a5c3ac02a2432d31794e6c72c153d725c6352fff647fd81 Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.644610 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2grr"] Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.933952 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2grr" event={"ID":"088aaa53-25ca-48c3-a904-2af0f07e8c2b","Type":"ContainerStarted","Data":"878fe673902cd80ce6ca092b6e629d1eddb9388f8d8bc0403e169b6a7dcb2669"} Feb 18 20:30:52 crc kubenswrapper[4932]: I0218 20:30:52.934270 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2grr" event={"ID":"088aaa53-25ca-48c3-a904-2af0f07e8c2b","Type":"ContainerStarted","Data":"66624ef687c548b92a5c3ac02a2432d31794e6c72c153d725c6352fff647fd81"} Feb 18 20:30:53 crc kubenswrapper[4932]: I0218 20:30:53.981160 4932 generic.go:334] "Generic (PLEG): container finished" podID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" 
containerID="878fe673902cd80ce6ca092b6e629d1eddb9388f8d8bc0403e169b6a7dcb2669" exitCode=0 Feb 18 20:30:53 crc kubenswrapper[4932]: I0218 20:30:53.981227 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2grr" event={"ID":"088aaa53-25ca-48c3-a904-2af0f07e8c2b","Type":"ContainerDied","Data":"878fe673902cd80ce6ca092b6e629d1eddb9388f8d8bc0403e169b6a7dcb2669"} Feb 18 20:30:54 crc kubenswrapper[4932]: I0218 20:30:54.993509 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2grr" event={"ID":"088aaa53-25ca-48c3-a904-2af0f07e8c2b","Type":"ContainerStarted","Data":"2951a6d2af3494aafb188ed0909a2d2930889940302fbfb9916042c18e76b0ac"} Feb 18 20:30:56 crc kubenswrapper[4932]: I0218 20:30:56.006488 4932 generic.go:334] "Generic (PLEG): container finished" podID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" containerID="2951a6d2af3494aafb188ed0909a2d2930889940302fbfb9916042c18e76b0ac" exitCode=0 Feb 18 20:30:56 crc kubenswrapper[4932]: I0218 20:30:56.006569 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2grr" event={"ID":"088aaa53-25ca-48c3-a904-2af0f07e8c2b","Type":"ContainerDied","Data":"2951a6d2af3494aafb188ed0909a2d2930889940302fbfb9916042c18e76b0ac"} Feb 18 20:30:56 crc kubenswrapper[4932]: E0218 20:30:56.408696 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:30:56 crc kubenswrapper[4932]: E0218 20:30:56.409438 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:30:56 crc kubenswrapper[4932]: E0218 20:30:56.410705 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:30:57 crc kubenswrapper[4932]: E0218 20:30:57.022544 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:30:57 crc kubenswrapper[4932]: I0218 20:30:57.606340 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:30:57 crc kubenswrapper[4932]: I0218 20:30:57.606408 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:31:01 crc kubenswrapper[4932]: E0218 20:31:01.945031 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:31:01 crc kubenswrapper[4932]: E0218 20:31:01.945608 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:31:01 crc kubenswrapper[4932]: E0218 20:31:01.946823 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:31:04 crc kubenswrapper[4932]: E0218 20:31:04.183910 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:31:10 crc kubenswrapper[4932]: E0218 20:31:10.948655 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:31:10 crc kubenswrapper[4932]: E0218 20:31:10.949591 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:31:10 crc kubenswrapper[4932]: E0218 20:31:10.950906 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:31:16 crc kubenswrapper[4932]: E0218 20:31:16.181967 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:31:19 crc kubenswrapper[4932]: E0218 20:31:19.182821 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:31:24 crc kubenswrapper[4932]: E0218 20:31:24.182170 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:31:27 crc kubenswrapper[4932]: I0218 20:31:27.606113 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:31:27 crc kubenswrapper[4932]: I0218 20:31:27.606783 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:31:28 crc kubenswrapper[4932]: E0218 20:31:28.848275 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:31:28 crc kubenswrapper[4932]: E0218 20:31:28.848493 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:31:28 crc kubenswrapper[4932]: E0218 20:31:28.849800 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:31:34 crc kubenswrapper[4932]: E0218 20:31:34.182984 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:31:36 crc kubenswrapper[4932]: E0218 20:31:36.692602 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:31:36 crc kubenswrapper[4932]: E0218 20:31:36.694947 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:31:36 crc kubenswrapper[4932]: E0218 20:31:36.696403 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:31:43 crc kubenswrapper[4932]: E0218 20:31:43.184910 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:31:47 crc kubenswrapper[4932]: E0218 20:31:47.182047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:31:52 crc kubenswrapper[4932]: E0218 20:31:52.182285 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:31:56 crc kubenswrapper[4932]: E0218 20:31:56.181925 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.606792 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.607293 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.607351 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.608453 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78c9f9fc09b3668d1f7c86135c8c1bd3ce72f15c084caed76c5a646c505ebcef"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.608552 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://78c9f9fc09b3668d1f7c86135c8c1bd3ce72f15c084caed76c5a646c505ebcef" gracePeriod=600 Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 
20:31:57.787482 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="78c9f9fc09b3668d1f7c86135c8c1bd3ce72f15c084caed76c5a646c505ebcef" exitCode=0 Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.787623 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"78c9f9fc09b3668d1f7c86135c8c1bd3ce72f15c084caed76c5a646c505ebcef"} Feb 18 20:31:57 crc kubenswrapper[4932]: I0218 20:31:57.787983 4932 scope.go:117] "RemoveContainer" containerID="9c0c0469f6ea35df324343eddfc2c12f1f2b7d1388223cb04bec8232e76dfb72" Feb 18 20:31:58 crc kubenswrapper[4932]: I0218 20:31:58.803742 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"} Feb 18 20:32:01 crc kubenswrapper[4932]: E0218 20:32:01.186493 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:32:07 crc kubenswrapper[4932]: E0218 20:32:07.195607 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:32:11 crc 
kubenswrapper[4932]: E0218 20:32:11.225245 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:32:11 crc kubenswrapper[4932]: E0218 20:32:11.226092 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy
:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:32:11 crc kubenswrapper[4932]: E0218 20:32:11.227414 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:32:16 crc kubenswrapper[4932]: E0218 20:32:16.182619 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:32:20 crc kubenswrapper[4932]: E0218 20:32:20.537507 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:32:20 crc kubenswrapper[4932]: E0218 20:32:20.538425 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:32:20 crc kubenswrapper[4932]: E0218 20:32:20.539692 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:32:22 crc kubenswrapper[4932]: E0218 20:32:22.181054 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:32:30 crc kubenswrapper[4932]: E0218 20:32:30.181907 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:32:33 crc kubenswrapper[4932]: E0218 20:32:33.182752 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:32:34 crc kubenswrapper[4932]: E0218 20:32:34.184142 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:32:43 crc kubenswrapper[4932]: E0218 20:32:43.186205 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:32:44 crc kubenswrapper[4932]: E0218 20:32:44.181550 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:32:47 crc kubenswrapper[4932]: E0218 20:32:47.197057 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:32:57 crc kubenswrapper[4932]: E0218 20:32:57.199079 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:32:58 crc kubenswrapper[4932]: E0218 20:32:58.183684 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:33:02 crc kubenswrapper[4932]: E0218 20:33:02.181506 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:33:09 crc kubenswrapper[4932]: E0218 20:33:09.183688 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:33:12 crc kubenswrapper[4932]: E0218 20:33:12.181070 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:33:13 crc kubenswrapper[4932]: E0218 20:33:13.181690 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:33:24 crc kubenswrapper[4932]: E0218 20:33:24.183768 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:33:24 crc kubenswrapper[4932]: E0218 20:33:24.183768 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:33:25 crc kubenswrapper[4932]: E0218 20:33:25.182955 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:33:36 crc kubenswrapper[4932]: I0218 20:33:36.183005 4932 provider.go:102] 
Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:33:36 crc kubenswrapper[4932]: E0218 20:33:36.627009 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:33:36 crc kubenswrapper[4932]: E0218 20:33:36.627196 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:33:36 crc kubenswrapper[4932]: E0218 20:33:36.628403 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:33:39 crc kubenswrapper[4932]: E0218 20:33:39.182447 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:33:39 crc kubenswrapper[4932]: E0218 20:33:39.603012 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:33:39 crc kubenswrapper[4932]: E0218 20:33:39.603265 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Sec
compProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:33:39 crc kubenswrapper[4932]: E0218 20:33:39.604556 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:33:48 crc kubenswrapper[4932]: E0218 20:33:48.182572 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:33:53 crc kubenswrapper[4932]: E0218 20:33:53.186211 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:33:55 crc kubenswrapper[4932]: E0218 20:33:55.581629 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image 
configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:33:55 crc kubenswrapper[4932]: E0218 20:33:55.583070 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:33:55 crc kubenswrapper[4932]: E0218 20:33:55.584483 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:34:02 crc kubenswrapper[4932]: E0218 20:34:02.184414 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:34:04 crc kubenswrapper[4932]: E0218 20:34:04.182461 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:34:10 crc kubenswrapper[4932]: E0218 20:34:10.180584 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:34:13 crc kubenswrapper[4932]: E0218 20:34:13.181924 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:34:18 crc kubenswrapper[4932]: E0218 20:34:18.970479 4932 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.190:43576->38.102.83.190:41227: write tcp 38.102.83.190:43576->38.102.83.190:41227: write: broken pipe Feb 18 20:34:19 crc kubenswrapper[4932]: E0218 20:34:19.182592 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:34:23 crc kubenswrapper[4932]: E0218 20:34:23.183115 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:34:24 crc kubenswrapper[4932]: E0218 20:34:24.181794 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:34:27 crc kubenswrapper[4932]: I0218 20:34:27.606624 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:34:27 crc kubenswrapper[4932]: I0218 20:34:27.607401 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:34:34 crc kubenswrapper[4932]: E0218 20:34:34.184603 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:34:35 crc kubenswrapper[4932]: E0218 20:34:35.181791 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:34:38 crc kubenswrapper[4932]: E0218 20:34:38.183514 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:34:46 crc kubenswrapper[4932]: E0218 20:34:46.183608 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:34:47 crc kubenswrapper[4932]: E0218 20:34:47.195094 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:34:50 
crc kubenswrapper[4932]: E0218 20:34:50.181767 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:34:57 crc kubenswrapper[4932]: I0218 20:34:57.606705 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:34:57 crc kubenswrapper[4932]: I0218 20:34:57.607552 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:34:58 crc kubenswrapper[4932]: E0218 20:34:58.182453 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:34:59 crc kubenswrapper[4932]: E0218 20:34:59.183688 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" 
podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:35:03 crc kubenswrapper[4932]: E0218 20:35:03.184305 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.001874 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8slcg"] Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.005930 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.045903 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8slcg"] Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.065271 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57dbf2a4-5676-4291-911d-00038d3c7c75-utilities\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.065349 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4sh\" (UniqueName: \"kubernetes.io/projected/57dbf2a4-5676-4291-911d-00038d3c7c75-kube-api-access-xz4sh\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.065407 4932 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57dbf2a4-5676-4291-911d-00038d3c7c75-catalog-content\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.167718 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57dbf2a4-5676-4291-911d-00038d3c7c75-utilities\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.167799 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz4sh\" (UniqueName: \"kubernetes.io/projected/57dbf2a4-5676-4291-911d-00038d3c7c75-kube-api-access-xz4sh\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.167848 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57dbf2a4-5676-4291-911d-00038d3c7c75-catalog-content\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.168496 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57dbf2a4-5676-4291-911d-00038d3c7c75-utilities\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.168599 4932 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57dbf2a4-5676-4291-911d-00038d3c7c75-catalog-content\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.212209 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz4sh\" (UniqueName: \"kubernetes.io/projected/57dbf2a4-5676-4291-911d-00038d3c7c75-kube-api-access-xz4sh\") pod \"community-operators-8slcg\" (UID: \"57dbf2a4-5676-4291-911d-00038d3c7c75\") " pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.377242 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8slcg" Feb 18 20:35:04 crc kubenswrapper[4932]: I0218 20:35:04.923091 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8slcg"] Feb 18 20:35:05 crc kubenswrapper[4932]: I0218 20:35:05.023344 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8slcg" event={"ID":"57dbf2a4-5676-4291-911d-00038d3c7c75","Type":"ContainerStarted","Data":"c7f2e7fbb70d9869728b6339d84005be8b57a54314e232383f2b03d5551f1998"} Feb 18 20:35:06 crc kubenswrapper[4932]: I0218 20:35:06.035454 4932 generic.go:334] "Generic (PLEG): container finished" podID="57dbf2a4-5676-4291-911d-00038d3c7c75" containerID="e37184b7e67125463f4eb5eda4953c7cfea3d3bd4c0efeedfc7b40d067b85b17" exitCode=0 Feb 18 20:35:06 crc kubenswrapper[4932]: I0218 20:35:06.035503 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8slcg" event={"ID":"57dbf2a4-5676-4291-911d-00038d3c7c75","Type":"ContainerDied","Data":"e37184b7e67125463f4eb5eda4953c7cfea3d3bd4c0efeedfc7b40d067b85b17"} Feb 18 20:35:07 crc 
Feb 18 20:35:07 crc kubenswrapper[4932]: E0218 20:35:07.751905 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 18 20:35:07 crc kubenswrapper[4932]: E0218 20:35:07.752550 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:35:07 crc kubenswrapper[4932]: E0218 20:35:07.754044 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:35:08 crc kubenswrapper[4932]: E0218 20:35:08.061380 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:35:10 crc kubenswrapper[4932]: E0218 20:35:10.181716 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:35:11 crc kubenswrapper[4932]: E0218 20:35:11.182309 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:35:14 crc kubenswrapper[4932]: E0218 20:35:14.183630 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:35:19 crc kubenswrapper[4932]: E0218 20:35:19.736780 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 18 20:35:19 crc kubenswrapper[4932]: E0218 20:35:19.737987 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:35:19 crc kubenswrapper[4932]: E0218 20:35:19.739305 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:35:22 crc kubenswrapper[4932]: E0218 20:35:22.183308 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:35:23 crc kubenswrapper[4932]: E0218 20:35:23.182573 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:35:27 crc kubenswrapper[4932]: I0218 20:35:27.606114 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:35:27 crc kubenswrapper[4932]: I0218 20:35:27.607808 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:35:27 crc kubenswrapper[4932]: I0218 20:35:27.607962 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4"
Feb 18 20:35:27 crc kubenswrapper[4932]: I0218 20:35:27.608845 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 20:35:27 crc kubenswrapper[4932]: I0218 20:35:27.609026 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" gracePeriod=600
Feb 18 20:35:27 crc kubenswrapper[4932]: E0218 20:35:27.745625 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:35:28 crc kubenswrapper[4932]: E0218 20:35:28.181063 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:35:28 crc kubenswrapper[4932]: I0218 20:35:28.310790 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" exitCode=0
Feb 18 20:35:28 crc kubenswrapper[4932]: I0218 20:35:28.310862 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"}
Feb 18 20:35:28 crc kubenswrapper[4932]: I0218 20:35:28.310919 4932 scope.go:117] "RemoveContainer" containerID="78c9f9fc09b3668d1f7c86135c8c1bd3ce72f15c084caed76c5a646c505ebcef"
Feb 18 20:35:28 crc kubenswrapper[4932]: I0218 20:35:28.312527 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"
Feb 18 20:35:28 crc kubenswrapper[4932]: E0218 20:35:28.316709 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:35:31 crc kubenswrapper[4932]: E0218 20:35:31.183756 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:35:35 crc kubenswrapper[4932]: E0218 20:35:35.180816 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:35:36 crc kubenswrapper[4932]: E0218 20:35:36.183621 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:35:41 crc kubenswrapper[4932]: E0218 20:35:41.183671 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:35:44 crc kubenswrapper[4932]: I0218 20:35:44.180132 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"
Feb 18 20:35:44 crc kubenswrapper[4932]: E0218 20:35:44.180821 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:35:44 crc kubenswrapper[4932]: E0218 20:35:44.714791 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 18 20:35:44 crc kubenswrapper[4932]: E0218 20:35:44.715027 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:35:44 crc kubenswrapper[4932]: E0218 20:35:44.716294 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:35:46 crc kubenswrapper[4932]: E0218 20:35:46.183144 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:35:49 crc kubenswrapper[4932]: E0218 20:35:49.183031 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:35:55 crc kubenswrapper[4932]: E0218 20:35:55.193400 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:35:56 crc kubenswrapper[4932]: E0218 20:35:56.182260 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:35:57 crc kubenswrapper[4932]: I0218 20:35:57.188703 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"
Feb 18 20:35:57 crc kubenswrapper[4932]: E0218 20:35:57.189769 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:35:59 crc kubenswrapper[4932]: E0218 20:35:59.182564 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:36:01 crc kubenswrapper[4932]: E0218 20:36:01.182542 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:36:07 crc kubenswrapper[4932]: E0218 20:36:07.197996 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:36:09 crc kubenswrapper[4932]: E0218 20:36:09.183099 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:36:11 crc kubenswrapper[4932]: I0218 20:36:11.179979 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"
Feb 18 20:36:11 crc kubenswrapper[4932]: E0218 20:36:11.180884 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:36:14 crc kubenswrapper[4932]: E0218 20:36:14.190777 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:36:16 crc kubenswrapper[4932]: E0218 20:36:16.181370 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:36:21 crc kubenswrapper[4932]: E0218 20:36:21.183217 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:36:22 crc kubenswrapper[4932]: E0218 20:36:22.181947 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:36:23 crc kubenswrapper[4932]: I0218 20:36:23.179115 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"
Feb 18 20:36:23 crc kubenswrapper[4932]: E0218 20:36:23.179479 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:36:26 crc kubenswrapper[4932]: E0218 20:36:26.323236 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 18 20:36:26 crc kubenswrapper[4932]: E0218 20:36:26.324044 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:36:26 crc kubenswrapper[4932]: E0218 20:36:26.325280 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:36:30 crc kubenswrapper[4932]: E0218 20:36:30.180972 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:36:33 crc kubenswrapper[4932]: E0218 20:36:33.112583 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 18 20:36:33 crc kubenswrapper[4932]: E0218 20:36:33.113403 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:36:33 crc kubenswrapper[4932]: E0218 20:36:33.115103 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:36:35 crc kubenswrapper[4932]: E0218 20:36:35.182458 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:36:37 crc kubenswrapper[4932]: I0218 20:36:37.186043 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"
Feb 18 20:36:37 crc kubenswrapper[4932]: E0218 20:36:37.187071 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:36:42 crc kubenswrapper[4932]: E0218 20:36:42.181553 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:36:42 crc kubenswrapper[4932]: E0218 20:36:42.182047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:36:48 crc kubenswrapper[4932]: E0218 20:36:48.184876 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:36:49 crc kubenswrapper[4932]: I0218 20:36:49.180627 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"
Feb 18 20:36:49 crc kubenswrapper[4932]: E0218 20:36:49.181286 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 20:36:50 crc kubenswrapper[4932]: E0218 20:36:50.764410 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad"
Feb 18 20:36:50 crc kubenswrapper[4932]: E0218 20:36:50.764944 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:36:50 crc kubenswrapper[4932]: E0218 20:36:50.766759 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:36:55 crc kubenswrapper[4932]: E0218 20:36:55.186386 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:36:57 crc kubenswrapper[4932]: E0218 20:36:57.194332 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:37:00 crc kubenswrapper[4932]: I0218 20:37:00.180656 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8"
Feb 18 20:37:00 crc kubenswrapper[4932]: E0218 20:37:00.181291 4932
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:37:04 crc kubenswrapper[4932]: E0218 20:37:04.186600 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:37:04 crc kubenswrapper[4932]: E0218 20:37:04.186618 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:37:06 crc kubenswrapper[4932]: E0218 20:37:06.180669 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:37:12 crc kubenswrapper[4932]: E0218 20:37:12.182445 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:37:15 crc kubenswrapper[4932]: I0218 20:37:15.194727 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:37:15 crc kubenswrapper[4932]: E0218 20:37:15.195637 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:37:17 crc kubenswrapper[4932]: E0218 20:37:17.190943 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:37:17 crc kubenswrapper[4932]: E0218 20:37:17.191509 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:37:17 crc kubenswrapper[4932]: E0218 20:37:17.190836 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:37:23 crc kubenswrapper[4932]: E0218 20:37:23.184824 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:37:27 crc kubenswrapper[4932]: I0218 20:37:27.194424 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:37:27 crc kubenswrapper[4932]: E0218 20:37:27.195770 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:37:28 crc kubenswrapper[4932]: E0218 20:37:28.183417 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:37:30 crc kubenswrapper[4932]: E0218 20:37:30.181916 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:37:31 crc kubenswrapper[4932]: E0218 20:37:31.183784 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:37:35 crc kubenswrapper[4932]: E0218 20:37:35.184279 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:37:39 crc kubenswrapper[4932]: I0218 20:37:39.180422 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:37:39 crc kubenswrapper[4932]: E0218 20:37:39.181451 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:37:42 crc kubenswrapper[4932]: E0218 20:37:42.183207 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:37:44 crc kubenswrapper[4932]: E0218 20:37:44.182271 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:37:45 crc kubenswrapper[4932]: E0218 20:37:45.181426 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:37:48 crc kubenswrapper[4932]: E0218 20:37:48.183340 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:37:50 crc kubenswrapper[4932]: I0218 20:37:50.179432 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:37:50 crc kubenswrapper[4932]: E0218 20:37:50.180086 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:37:53 crc kubenswrapper[4932]: E0218 20:37:53.183223 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:37:56 crc kubenswrapper[4932]: E0218 20:37:56.183820 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:37:57 crc kubenswrapper[4932]: E0218 20:37:57.363295 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:37:57 crc kubenswrapper[4932]: E0218 20:37:57.363926 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:37:57 crc kubenswrapper[4932]: E0218 20:37:57.365163 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:38:02 crc kubenswrapper[4932]: E0218 20:38:02.181360 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:38:05 crc kubenswrapper[4932]: I0218 20:38:05.178864 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:38:05 crc kubenswrapper[4932]: E0218 20:38:05.179594 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:38:08 crc kubenswrapper[4932]: E0218 20:38:08.182902 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:38:11 crc kubenswrapper[4932]: E0218 20:38:11.183299 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:38:13 crc kubenswrapper[4932]: E0218 20:38:13.181116 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:38:16 crc kubenswrapper[4932]: E0218 20:38:16.182157 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:38:19 crc kubenswrapper[4932]: I0218 20:38:19.180209 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:38:19 crc kubenswrapper[4932]: E0218 20:38:19.181229 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:38:22 crc kubenswrapper[4932]: E0218 20:38:22.183134 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" 
podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:38:25 crc kubenswrapper[4932]: E0218 20:38:25.182393 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:38:26 crc kubenswrapper[4932]: E0218 20:38:26.181542 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:38:29 crc kubenswrapper[4932]: E0218 20:38:29.183269 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:38:31 crc kubenswrapper[4932]: I0218 20:38:31.180440 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:38:31 crc kubenswrapper[4932]: E0218 20:38:31.182838 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:38:35 crc kubenswrapper[4932]: E0218 
20:38:35.181955 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:38:40 crc kubenswrapper[4932]: E0218 20:38:40.181745 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:38:40 crc kubenswrapper[4932]: I0218 20:38:40.182527 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:38:41 crc kubenswrapper[4932]: E0218 20:38:41.181082 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:38:43 crc kubenswrapper[4932]: E0218 20:38:43.248787 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:38:43 crc kubenswrapper[4932]: E0218 20:38:43.249049 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:38:43 crc kubenswrapper[4932]: E0218 20:38:43.250202 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:38:46 crc kubenswrapper[4932]: I0218 20:38:46.179407 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:38:46 crc kubenswrapper[4932]: E0218 20:38:46.180254 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:38:48 crc kubenswrapper[4932]: E0218 20:38:48.182827 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:38:54 crc kubenswrapper[4932]: E0218 20:38:54.186842 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:38:55 crc kubenswrapper[4932]: E0218 20:38:55.182023 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:38:58 crc kubenswrapper[4932]: E0218 20:38:58.182364 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:38:59 crc kubenswrapper[4932]: I0218 20:38:59.179356 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:38:59 crc kubenswrapper[4932]: E0218 20:38:59.180055 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:39:01 crc kubenswrapper[4932]: E0218 20:39:01.183615 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:39:07 crc kubenswrapper[4932]: E0218 20:39:07.190782 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:39:07 crc kubenswrapper[4932]: E0218 20:39:07.190871 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:39:12 crc kubenswrapper[4932]: E0218 20:39:12.182581 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:39:14 crc kubenswrapper[4932]: I0218 20:39:14.179616 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:39:14 crc kubenswrapper[4932]: E0218 20:39:14.180574 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:39:16 crc kubenswrapper[4932]: E0218 20:39:16.182479 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:39:18 crc kubenswrapper[4932]: E0218 20:39:18.195111 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:39:18 crc kubenswrapper[4932]: E0218 20:39:18.214043 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:39:25 crc kubenswrapper[4932]: E0218 20:39:25.182580 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:39:27 crc kubenswrapper[4932]: I0218 20:39:27.196896 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:39:27 crc kubenswrapper[4932]: E0218 20:39:27.198061 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:39:28 crc kubenswrapper[4932]: E0218 20:39:28.182132 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" 
podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:39:32 crc kubenswrapper[4932]: E0218 20:39:32.182024 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:39:33 crc kubenswrapper[4932]: E0218 20:39:33.182667 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:39:39 crc kubenswrapper[4932]: I0218 20:39:39.180720 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:39:39 crc kubenswrapper[4932]: E0218 20:39:39.183102 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:39:40 crc kubenswrapper[4932]: E0218 20:39:40.184248 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:39:40 crc kubenswrapper[4932]: E0218 
20:39:40.184418 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:39:45 crc kubenswrapper[4932]: E0218 20:39:45.182717 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:39:45 crc kubenswrapper[4932]: E0218 20:39:45.183465 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:39:51 crc kubenswrapper[4932]: E0218 20:39:51.184708 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:39:52 crc kubenswrapper[4932]: E0218 20:39:52.183035 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:39:54 crc kubenswrapper[4932]: I0218 20:39:54.180825 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:39:54 crc kubenswrapper[4932]: E0218 20:39:54.181802 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:39:56 crc kubenswrapper[4932]: E0218 20:39:56.183689 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:40:00 crc kubenswrapper[4932]: E0218 20:40:00.182492 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:40:05 crc kubenswrapper[4932]: E0218 20:40:05.436507 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" 
podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:40:06 crc kubenswrapper[4932]: E0218 20:40:06.183511 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:40:07 crc kubenswrapper[4932]: I0218 20:40:07.198906 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:40:07 crc kubenswrapper[4932]: E0218 20:40:07.199798 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:40:10 crc kubenswrapper[4932]: E0218 20:40:10.181790 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:40:13 crc kubenswrapper[4932]: E0218 20:40:13.542496 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:40:19 crc kubenswrapper[4932]: E0218 
20:40:19.183424 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:40:20 crc kubenswrapper[4932]: I0218 20:40:20.180222 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:40:20 crc kubenswrapper[4932]: E0218 20:40:20.180545 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:40:21 crc kubenswrapper[4932]: E0218 20:40:21.183499 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:40:23 crc kubenswrapper[4932]: E0218 20:40:23.182751 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:40:27 crc kubenswrapper[4932]: E0218 20:40:27.199849 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:40:34 crc kubenswrapper[4932]: I0218 20:40:34.182213 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:40:34 crc kubenswrapper[4932]: E0218 20:40:34.186450 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:40:34 crc kubenswrapper[4932]: I0218 20:40:34.821682 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"8907f611fdc4bb018a8a5f1574c9b0677d04bcde1f4d724c23d8ea1124f73d8b"} Feb 18 20:40:35 crc kubenswrapper[4932]: E0218 20:40:35.181576 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:40:35 crc kubenswrapper[4932]: E0218 20:40:35.181808 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:40:44 crc kubenswrapper[4932]: E0218 20:40:44.607393 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:40:44 crc kubenswrapper[4932]: E0218 20:40:44.608081 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:f
alse,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:40:44 crc kubenswrapper[4932]: E0218 20:40:44.610242 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:40:47 crc kubenswrapper[4932]: E0218 20:40:47.204674 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:40:48 crc kubenswrapper[4932]: E0218 20:40:48.183267 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:40:49 crc kubenswrapper[4932]: E0218 20:40:49.184130 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:40:56 crc kubenswrapper[4932]: E0218 20:40:56.182763 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:41:01 crc kubenswrapper[4932]: E0218 20:41:01.183247 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:41:01 crc kubenswrapper[4932]: E0218 20:41:01.183323 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:41:01 crc kubenswrapper[4932]: E0218 20:41:01.183544 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:41:11 crc kubenswrapper[4932]: E0218 20:41:11.183107 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:41:12 crc kubenswrapper[4932]: E0218 20:41:12.180672 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:41:14 crc kubenswrapper[4932]: E0218 20:41:14.181460 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:41:15 crc kubenswrapper[4932]: E0218 20:41:15.182407 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:41:24 crc kubenswrapper[4932]: E0218 20:41:24.182612 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:41:26 crc kubenswrapper[4932]: E0218 20:41:26.182389 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:41:26 crc kubenswrapper[4932]: E0218 20:41:26.182399 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:41:27 crc kubenswrapper[4932]: E0218 20:41:27.195824 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:41:38 crc kubenswrapper[4932]: E0218 20:41:38.184568 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:41:39 crc kubenswrapper[4932]: E0218 20:41:39.183871 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:41:39 crc kubenswrapper[4932]: E0218 20:41:39.184268 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:41:40 crc kubenswrapper[4932]: E0218 20:41:40.621642 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:41:40 crc kubenswrapper[4932]: E0218 20:41:40.622278 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:41:40 crc kubenswrapper[4932]: E0218 20:41:40.623562 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:41:50 crc kubenswrapper[4932]: E0218 20:41:50.183646 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:41:51 crc kubenswrapper[4932]: E0218 20:41:51.182087 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:41:51 crc kubenswrapper[4932]: E0218 20:41:51.182926 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:41:54 crc kubenswrapper[4932]: E0218 20:41:54.183441 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:42:02 crc kubenswrapper[4932]: E0218 20:42:02.183945 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:42:02 crc kubenswrapper[4932]: E0218 20:42:02.945021 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:42:02 crc kubenswrapper[4932]: E0218 20:42:02.945550 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:42:02 crc kubenswrapper[4932]: E0218 20:42:02.946793 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:42:04 crc 
kubenswrapper[4932]: E0218 20:42:04.183933 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:42:08 crc kubenswrapper[4932]: E0218 20:42:08.184990 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:42:15 crc kubenswrapper[4932]: E0218 20:42:15.183996 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:42:15 crc kubenswrapper[4932]: E0218 20:42:15.184613 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:42:18 crc kubenswrapper[4932]: E0218 20:42:18.181607 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:42:19 crc kubenswrapper[4932]: E0218 20:42:19.182627 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:42:26 crc kubenswrapper[4932]: E0218 20:42:26.182675 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:42:28 crc kubenswrapper[4932]: E0218 20:42:28.182093 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:42:30 crc kubenswrapper[4932]: E0218 20:42:30.182249 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:42:31 crc kubenswrapper[4932]: E0218 20:42:31.182918 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:42:40 crc kubenswrapper[4932]: E0218 20:42:40.181697 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:42:40 crc kubenswrapper[4932]: E0218 20:42:40.181767 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:42:41 crc kubenswrapper[4932]: E0218 20:42:41.182439 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:42:44 crc kubenswrapper[4932]: E0218 20:42:44.182253 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:42:52 crc kubenswrapper[4932]: E0218 20:42:52.185129 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:42:53 crc kubenswrapper[4932]: E0218 20:42:53.183499 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:42:56 crc kubenswrapper[4932]: E0218 20:42:56.183133 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:42:57 crc kubenswrapper[4932]: I0218 20:42:57.606332 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:42:57 crc kubenswrapper[4932]: I0218 20:42:57.606779 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:42:59 crc kubenswrapper[4932]: E0218 20:42:59.181929 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:43:04 crc kubenswrapper[4932]: E0218 20:43:04.189236 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:43:07 crc kubenswrapper[4932]: E0218 20:43:07.196824 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:43:10 crc kubenswrapper[4932]: E0218 20:43:10.182162 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:43:12 crc kubenswrapper[4932]: E0218 20:43:12.181635 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:43:16 crc kubenswrapper[4932]: E0218 20:43:16.182654 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:43:18 crc kubenswrapper[4932]: E0218 20:43:18.181645 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:43:24 crc kubenswrapper[4932]: E0218 20:43:24.183744 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:43:26 crc kubenswrapper[4932]: E0218 20:43:26.182528 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:43:27 crc kubenswrapper[4932]: I0218 20:43:27.606336 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:43:27 crc kubenswrapper[4932]: I0218 20:43:27.606618 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:43:29 crc kubenswrapper[4932]: E0218 20:43:29.182508 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:43:31 crc kubenswrapper[4932]: E0218 20:43:31.182985 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:43:35 crc kubenswrapper[4932]: E0218 20:43:35.182682 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:43:37 crc kubenswrapper[4932]: E0218 20:43:37.182963 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:43:43 crc kubenswrapper[4932]: E0218 20:43:43.183361 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:43:44 crc kubenswrapper[4932]: I0218 20:43:44.185825 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:43:46 crc kubenswrapper[4932]: E0218 20:43:46.680094 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:43:46 crc kubenswrapper[4932]: E0218 20:43:46.681153 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:43:46 crc kubenswrapper[4932]: E0218 20:43:46.682558 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:43:48 crc kubenswrapper[4932]: E0218 20:43:48.181793 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:43:50 crc kubenswrapper[4932]: E0218 20:43:50.181691 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:43:54 crc kubenswrapper[4932]: E0218 20:43:54.183253 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:43:57 crc kubenswrapper[4932]: I0218 20:43:57.606124 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 18 20:43:57 crc kubenswrapper[4932]: I0218 20:43:57.606807 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:43:57 crc kubenswrapper[4932]: I0218 20:43:57.606868 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:43:57 crc kubenswrapper[4932]: I0218 20:43:57.608071 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8907f611fdc4bb018a8a5f1574c9b0677d04bcde1f4d724c23d8ea1124f73d8b"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:43:57 crc kubenswrapper[4932]: I0218 20:43:57.608204 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://8907f611fdc4bb018a8a5f1574c9b0677d04bcde1f4d724c23d8ea1124f73d8b" gracePeriod=600 Feb 18 20:43:58 crc kubenswrapper[4932]: I0218 20:43:58.349862 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="8907f611fdc4bb018a8a5f1574c9b0677d04bcde1f4d724c23d8ea1124f73d8b" exitCode=0 Feb 18 20:43:58 crc kubenswrapper[4932]: I0218 20:43:58.349991 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" 
event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"8907f611fdc4bb018a8a5f1574c9b0677d04bcde1f4d724c23d8ea1124f73d8b"} Feb 18 20:43:58 crc kubenswrapper[4932]: I0218 20:43:58.350438 4932 scope.go:117] "RemoveContainer" containerID="e40e8b6b75bbd4622c3ef163d264c49b29ba62900e9a61f8bc3dfdbe3c6f53d8" Feb 18 20:43:59 crc kubenswrapper[4932]: E0218 20:43:59.182860 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:43:59 crc kubenswrapper[4932]: I0218 20:43:59.359500 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e"} Feb 18 20:44:01 crc kubenswrapper[4932]: E0218 20:44:01.198328 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:44:02 crc kubenswrapper[4932]: E0218 20:44:02.181519 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:44:08 crc kubenswrapper[4932]: E0218 20:44:08.182571 4932 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:44:13 crc kubenswrapper[4932]: E0218 20:44:13.185698 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:44:13 crc kubenswrapper[4932]: E0218 20:44:13.186041 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:44:14 crc kubenswrapper[4932]: E0218 20:44:14.183701 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:44:20 crc kubenswrapper[4932]: E0218 20:44:20.182748 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:44:26 crc kubenswrapper[4932]: E0218 20:44:26.183286 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:44:28 crc kubenswrapper[4932]: E0218 20:44:28.184213 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:44:28 crc kubenswrapper[4932]: E0218 20:44:28.184263 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:44:35 crc kubenswrapper[4932]: E0218 20:44:35.187571 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:44:37 crc kubenswrapper[4932]: E0218 20:44:37.214814 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:44:40 crc kubenswrapper[4932]: E0218 20:44:40.181315 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:44:43 crc kubenswrapper[4932]: E0218 20:44:43.183842 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:44:47 crc kubenswrapper[4932]: E0218 20:44:47.191985 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:44:50 crc kubenswrapper[4932]: E0218 20:44:50.188395 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:44:55 crc kubenswrapper[4932]: E0218 20:44:55.183196 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:44:56 crc kubenswrapper[4932]: E0218 20:44:56.609248 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.209504 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb"] Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.211689 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.214413 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.224706 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb"] Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.226884 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.271338 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6gnj\" (UniqueName: \"kubernetes.io/projected/3d2e2003-21f3-440a-85dc-1b34c00c6199-kube-api-access-d6gnj\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.271412 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d2e2003-21f3-440a-85dc-1b34c00c6199-config-volume\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.271461 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d2e2003-21f3-440a-85dc-1b34c00c6199-secret-volume\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.374502 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6gnj\" (UniqueName: \"kubernetes.io/projected/3d2e2003-21f3-440a-85dc-1b34c00c6199-kube-api-access-d6gnj\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.374591 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d2e2003-21f3-440a-85dc-1b34c00c6199-config-volume\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.374646 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d2e2003-21f3-440a-85dc-1b34c00c6199-secret-volume\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.375878 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d2e2003-21f3-440a-85dc-1b34c00c6199-config-volume\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.391761 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3d2e2003-21f3-440a-85dc-1b34c00c6199-secret-volume\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.392192 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6gnj\" (UniqueName: \"kubernetes.io/projected/3d2e2003-21f3-440a-85dc-1b34c00c6199-kube-api-access-d6gnj\") pod \"collect-profiles-29524125-q6ngb\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:00 crc kubenswrapper[4932]: I0218 20:45:00.542404 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:01 crc kubenswrapper[4932]: I0218 20:45:01.085103 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb"] Feb 18 20:45:01 crc kubenswrapper[4932]: I0218 20:45:01.691502 4932 generic.go:334] "Generic (PLEG): container finished" podID="3d2e2003-21f3-440a-85dc-1b34c00c6199" containerID="df58ee848cab5e8c5456dcfe68de60c64126c02fb04b91974217490a862cd781" exitCode=0 Feb 18 20:45:01 crc kubenswrapper[4932]: I0218 20:45:01.691557 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" event={"ID":"3d2e2003-21f3-440a-85dc-1b34c00c6199","Type":"ContainerDied","Data":"df58ee848cab5e8c5456dcfe68de60c64126c02fb04b91974217490a862cd781"} Feb 18 20:45:01 crc kubenswrapper[4932]: I0218 20:45:01.691920 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" 
event={"ID":"3d2e2003-21f3-440a-85dc-1b34c00c6199","Type":"ContainerStarted","Data":"3a8a2b0c9977a7299a21c42c295c178472fc2536dbf7ac6ada2076a256f4ee05"} Feb 18 20:45:02 crc kubenswrapper[4932]: E0218 20:45:02.181492 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.119950 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.247557 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d2e2003-21f3-440a-85dc-1b34c00c6199-config-volume\") pod \"3d2e2003-21f3-440a-85dc-1b34c00c6199\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.248034 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d2e2003-21f3-440a-85dc-1b34c00c6199-secret-volume\") pod \"3d2e2003-21f3-440a-85dc-1b34c00c6199\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.248158 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6gnj\" (UniqueName: \"kubernetes.io/projected/3d2e2003-21f3-440a-85dc-1b34c00c6199-kube-api-access-d6gnj\") pod \"3d2e2003-21f3-440a-85dc-1b34c00c6199\" (UID: \"3d2e2003-21f3-440a-85dc-1b34c00c6199\") " Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.248766 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2e2003-21f3-440a-85dc-1b34c00c6199-config-volume" (OuterVolumeSpecName: "config-volume") pod "3d2e2003-21f3-440a-85dc-1b34c00c6199" (UID: "3d2e2003-21f3-440a-85dc-1b34c00c6199"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.255161 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d2e2003-21f3-440a-85dc-1b34c00c6199-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3d2e2003-21f3-440a-85dc-1b34c00c6199" (UID: "3d2e2003-21f3-440a-85dc-1b34c00c6199"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.255685 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d2e2003-21f3-440a-85dc-1b34c00c6199-kube-api-access-d6gnj" (OuterVolumeSpecName: "kube-api-access-d6gnj") pod "3d2e2003-21f3-440a-85dc-1b34c00c6199" (UID: "3d2e2003-21f3-440a-85dc-1b34c00c6199"). InnerVolumeSpecName "kube-api-access-d6gnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.351969 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d2e2003-21f3-440a-85dc-1b34c00c6199-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.352018 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6gnj\" (UniqueName: \"kubernetes.io/projected/3d2e2003-21f3-440a-85dc-1b34c00c6199-kube-api-access-d6gnj\") on node \"crc\" DevicePath \"\"" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.352037 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d2e2003-21f3-440a-85dc-1b34c00c6199-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.718802 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" event={"ID":"3d2e2003-21f3-440a-85dc-1b34c00c6199","Type":"ContainerDied","Data":"3a8a2b0c9977a7299a21c42c295c178472fc2536dbf7ac6ada2076a256f4ee05"} Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.718869 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a8a2b0c9977a7299a21c42c295c178472fc2536dbf7ac6ada2076a256f4ee05" Feb 18 20:45:03 crc kubenswrapper[4932]: I0218 20:45:03.718955 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-q6ngb" Feb 18 20:45:04 crc kubenswrapper[4932]: I0218 20:45:04.239965 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf"] Feb 18 20:45:04 crc kubenswrapper[4932]: I0218 20:45:04.251848 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-w6qbf"] Feb 18 20:45:05 crc kubenswrapper[4932]: E0218 20:45:05.183061 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:45:05 crc kubenswrapper[4932]: I0218 20:45:05.197649 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9637eec3-3d3f-435b-9a57-ef318aa5300c" path="/var/lib/kubelet/pods/9637eec3-3d3f-435b-9a57-ef318aa5300c/volumes" Feb 18 20:45:06 crc kubenswrapper[4932]: I0218 20:45:06.986545 4932 scope.go:117] "RemoveContainer" containerID="a9f13f16fae2f188590028710fb520ed99f739785e726a38525e8fd3c5b3e49f" Feb 18 20:45:09 crc kubenswrapper[4932]: E0218 20:45:09.187424 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:45:10 crc kubenswrapper[4932]: E0218 20:45:10.181982 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:45:16 crc kubenswrapper[4932]: E0218 20:45:16.183457 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:45:20 crc kubenswrapper[4932]: E0218 20:45:20.182579 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:45:24 crc kubenswrapper[4932]: E0218 20:45:24.181662 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:45:24 crc kubenswrapper[4932]: E0218 20:45:24.181693 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:45:31 crc kubenswrapper[4932]: E0218 20:45:31.182313 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:45:34 crc kubenswrapper[4932]: E0218 20:45:34.181952 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:45:35 crc kubenswrapper[4932]: E0218 20:45:35.186075 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:45:36 crc kubenswrapper[4932]: E0218 20:45:36.181152 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:45:42 crc kubenswrapper[4932]: E0218 20:45:42.183126 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:45:46 crc kubenswrapper[4932]: E0218 20:45:46.182614 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:45:48 crc kubenswrapper[4932]: E0218 20:45:48.984034 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:45:48 crc kubenswrapper[4932]: E0218 20:45:48.984724 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:45:48 crc kubenswrapper[4932]: E0218 20:45:48.985995 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:45:50 crc kubenswrapper[4932]: E0218 20:45:50.183061 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:45:54 crc kubenswrapper[4932]: E0218 20:45:54.182713 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:46:01 crc kubenswrapper[4932]: E0218 20:46:01.184199 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:46:03 crc kubenswrapper[4932]: E0218 20:46:03.183093 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:46:04 crc kubenswrapper[4932]: E0218 20:46:04.182665 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:46:07 crc kubenswrapper[4932]: E0218 20:46:07.196709 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:46:16 crc kubenswrapper[4932]: E0218 20:46:16.183682 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:46:16 crc kubenswrapper[4932]: E0218 20:46:16.183705 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:46:16 crc kubenswrapper[4932]: E0218 20:46:16.184665 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:46:22 crc kubenswrapper[4932]: E0218 20:46:22.181538 4932 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:46:27 crc kubenswrapper[4932]: I0218 20:46:27.606643 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:46:27 crc kubenswrapper[4932]: I0218 20:46:27.607415 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:46:29 crc kubenswrapper[4932]: E0218 20:46:29.183130 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:46:29 crc kubenswrapper[4932]: E0218 20:46:29.183512 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:46:29 crc kubenswrapper[4932]: E0218 20:46:29.184169 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:46:32 crc kubenswrapper[4932]: E0218 20:46:32.090515 4932 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.190:59140->38.102.83.190:41227: write tcp 38.102.83.190:59140->38.102.83.190:41227: write: broken pipe Feb 18 20:46:35 crc kubenswrapper[4932]: E0218 20:46:35.183072 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:46:42 crc kubenswrapper[4932]: E0218 20:46:42.181484 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:46:42 crc kubenswrapper[4932]: E0218 20:46:42.873230 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:46:42 crc kubenswrapper[4932]: E0218 20:46:42.873543 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:46:42 crc kubenswrapper[4932]: E0218 20:46:42.874800 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:46:44 crc kubenswrapper[4932]: E0218 20:46:44.182388 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:46:48 crc kubenswrapper[4932]: E0218 20:46:48.183133 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:46:55 crc kubenswrapper[4932]: E0218 20:46:55.184096 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:46:56 crc kubenswrapper[4932]: E0218 20:46:56.182275 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:46:57 crc 
kubenswrapper[4932]: I0218 20:46:57.606158 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:46:57 crc kubenswrapper[4932]: I0218 20:46:57.606282 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:46:58 crc kubenswrapper[4932]: E0218 20:46:58.183205 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:47:05 crc kubenswrapper[4932]: E0218 20:47:05.238613 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:47:05 crc kubenswrapper[4932]: E0218 20:47:05.239426 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:47:05 crc kubenswrapper[4932]: E0218 20:47:05.240721 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:47:08 crc kubenswrapper[4932]: E0218 20:47:08.182857 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:47:09 crc kubenswrapper[4932]: E0218 20:47:09.182731 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:47:12 crc kubenswrapper[4932]: E0218 20:47:12.182347 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:47:18 crc kubenswrapper[4932]: E0218 20:47:18.184075 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:47:19 crc kubenswrapper[4932]: E0218 20:47:19.196487 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:47:23 crc kubenswrapper[4932]: E0218 20:47:23.184112 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:47:24 crc kubenswrapper[4932]: E0218 20:47:24.180961 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:47:27 crc kubenswrapper[4932]: I0218 20:47:27.605925 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:47:27 crc kubenswrapper[4932]: I0218 20:47:27.606700 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:47:27 crc kubenswrapper[4932]: I0218 20:47:27.606767 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:47:27 crc kubenswrapper[4932]: I0218 20:47:27.607907 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:47:27 crc kubenswrapper[4932]: I0218 20:47:27.607998 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" gracePeriod=600 Feb 18 20:47:27 crc kubenswrapper[4932]: E0218 20:47:27.744465 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:47:28 crc kubenswrapper[4932]: I0218 20:47:28.589385 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" exitCode=0 Feb 18 20:47:28 crc kubenswrapper[4932]: I0218 20:47:28.589488 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e"} Feb 18 20:47:28 crc kubenswrapper[4932]: I0218 20:47:28.590563 4932 scope.go:117] "RemoveContainer" containerID="8907f611fdc4bb018a8a5f1574c9b0677d04bcde1f4d724c23d8ea1124f73d8b" Feb 18 20:47:28 crc kubenswrapper[4932]: I0218 20:47:28.597867 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:47:28 crc kubenswrapper[4932]: E0218 20:47:28.599031 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:47:30 crc kubenswrapper[4932]: E0218 20:47:30.182249 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:47:32 crc kubenswrapper[4932]: E0218 20:47:32.182065 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:47:34 crc kubenswrapper[4932]: E0218 20:47:34.183436 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:47:35 crc kubenswrapper[4932]: E0218 20:47:35.182449 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:47:42 crc kubenswrapper[4932]: I0218 20:47:42.179656 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:47:42 crc kubenswrapper[4932]: E0218 20:47:42.181119 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:47:42 crc kubenswrapper[4932]: E0218 20:47:42.183402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:47:43 crc kubenswrapper[4932]: E0218 20:47:43.185576 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:47:45 crc kubenswrapper[4932]: E0218 20:47:45.181940 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:47:46 crc kubenswrapper[4932]: E0218 20:47:46.181294 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:47:54 crc kubenswrapper[4932]: I0218 20:47:54.179453 4932 scope.go:117] 
"RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:47:54 crc kubenswrapper[4932]: E0218 20:47:54.180196 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:47:54 crc kubenswrapper[4932]: E0218 20:47:54.184497 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:47:56 crc kubenswrapper[4932]: E0218 20:47:56.183009 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:47:59 crc kubenswrapper[4932]: E0218 20:47:59.184571 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:48:00 crc kubenswrapper[4932]: E0218 20:48:00.181883 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:48:06 crc kubenswrapper[4932]: I0218 20:48:06.179796 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:48:06 crc kubenswrapper[4932]: E0218 20:48:06.180643 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:48:06 crc kubenswrapper[4932]: E0218 20:48:06.182283 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:48:08 crc kubenswrapper[4932]: E0218 20:48:08.197494 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:48:12 crc kubenswrapper[4932]: E0218 20:48:12.182744 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:48:14 crc kubenswrapper[4932]: E0218 20:48:14.184493 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:48:19 crc kubenswrapper[4932]: E0218 20:48:19.182077 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:48:20 crc kubenswrapper[4932]: I0218 20:48:20.179382 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:48:20 crc kubenswrapper[4932]: E0218 20:48:20.180313 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:48:20 crc kubenswrapper[4932]: E0218 20:48:20.180619 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:48:24 crc kubenswrapper[4932]: E0218 20:48:24.181637 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:48:27 crc kubenswrapper[4932]: E0218 20:48:27.192162 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:48:30 crc kubenswrapper[4932]: E0218 20:48:30.183453 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:48:32 crc kubenswrapper[4932]: I0218 20:48:32.179582 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:48:32 crc kubenswrapper[4932]: E0218 20:48:32.180674 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:48:35 crc kubenswrapper[4932]: E0218 20:48:35.183421 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:48:36 crc kubenswrapper[4932]: E0218 20:48:36.181795 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:48:42 crc kubenswrapper[4932]: E0218 20:48:42.180848 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:48:42 crc kubenswrapper[4932]: E0218 20:48:42.181277 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:48:44 crc kubenswrapper[4932]: I0218 20:48:44.180491 4932 scope.go:117] 
"RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:48:44 crc kubenswrapper[4932]: E0218 20:48:44.180982 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:48:46 crc kubenswrapper[4932]: E0218 20:48:46.186690 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:48:48 crc kubenswrapper[4932]: E0218 20:48:48.184256 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:48:56 crc kubenswrapper[4932]: I0218 20:48:56.181074 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:48:56 crc kubenswrapper[4932]: E0218 20:48:56.182462 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:48:56 crc kubenswrapper[4932]: E0218 20:48:56.183692 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:48:56 crc kubenswrapper[4932]: I0218 20:48:56.184292 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:48:56 crc kubenswrapper[4932]: E0218 20:48:56.713820 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:48:56 crc kubenswrapper[4932]: E0218 20:48:56.714368 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:48:56 crc kubenswrapper[4932]: E0218 20:48:56.717984 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:49:01 crc kubenswrapper[4932]: E0218 20:49:01.181448 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:49:01 crc kubenswrapper[4932]: E0218 20:49:01.181766 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:49:09 crc kubenswrapper[4932]: E0218 20:49:09.182054 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:49:09 crc kubenswrapper[4932]: E0218 20:49:09.182536 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:49:10 crc kubenswrapper[4932]: I0218 20:49:10.179299 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:49:10 crc kubenswrapper[4932]: E0218 20:49:10.179864 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:49:12 crc kubenswrapper[4932]: E0218 20:49:12.182263 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:49:16 crc kubenswrapper[4932]: E0218 20:49:16.183967 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:49:20 crc kubenswrapper[4932]: E0218 20:49:20.181292 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:49:22 crc kubenswrapper[4932]: E0218 20:49:22.183156 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:49:24 crc kubenswrapper[4932]: E0218 20:49:24.182677 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:49:25 crc kubenswrapper[4932]: I0218 20:49:25.180377 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:49:25 crc kubenswrapper[4932]: E0218 20:49:25.180977 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:49:31 crc kubenswrapper[4932]: E0218 20:49:31.184323 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:49:31 crc kubenswrapper[4932]: E0218 20:49:31.184415 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:49:34 crc kubenswrapper[4932]: E0218 20:49:34.180916 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:49:38 crc kubenswrapper[4932]: I0218 20:49:38.178845 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:49:38 crc kubenswrapper[4932]: E0218 20:49:38.179725 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:49:38 crc kubenswrapper[4932]: E0218 20:49:38.182047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:49:43 crc kubenswrapper[4932]: E0218 20:49:43.181375 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:49:43 crc kubenswrapper[4932]: E0218 20:49:43.182782 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:49:47 crc kubenswrapper[4932]: E0218 20:49:47.192247 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:49:49 crc kubenswrapper[4932]: I0218 20:49:49.179940 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:49:49 crc kubenswrapper[4932]: E0218 20:49:49.180819 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" 
podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:49:51 crc kubenswrapper[4932]: E0218 20:49:51.183147 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:49:56 crc kubenswrapper[4932]: E0218 20:49:56.182497 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:49:57 crc kubenswrapper[4932]: E0218 20:49:57.200164 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:50:00 crc kubenswrapper[4932]: E0218 20:50:00.181696 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:50:02 crc kubenswrapper[4932]: I0218 20:50:02.179390 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:50:02 crc kubenswrapper[4932]: E0218 20:50:02.180088 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:50:05 crc kubenswrapper[4932]: E0218 20:50:05.184540 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:50:11 crc kubenswrapper[4932]: E0218 20:50:11.181560 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:50:11 crc kubenswrapper[4932]: E0218 20:50:11.181602 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:50:13 crc kubenswrapper[4932]: E0218 20:50:13.180995 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:50:16 crc kubenswrapper[4932]: I0218 20:50:16.179313 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:50:16 crc kubenswrapper[4932]: E0218 20:50:16.179809 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:50:18 crc kubenswrapper[4932]: E0218 20:50:18.182640 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:50:23 crc kubenswrapper[4932]: E0218 20:50:23.184484 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:50:25 crc kubenswrapper[4932]: E0218 20:50:25.182446 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:50:26 crc kubenswrapper[4932]: E0218 20:50:26.180929 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:50:30 crc kubenswrapper[4932]: I0218 20:50:30.179712 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:50:30 crc kubenswrapper[4932]: E0218 20:50:30.180827 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:50:33 crc kubenswrapper[4932]: E0218 20:50:33.183809 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:50:37 crc kubenswrapper[4932]: E0218 20:50:37.189646 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:50:38 crc kubenswrapper[4932]: E0218 20:50:38.182722 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:50:38 crc kubenswrapper[4932]: E0218 20:50:38.182955 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:50:43 crc kubenswrapper[4932]: I0218 20:50:43.180635 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:50:43 crc kubenswrapper[4932]: E0218 20:50:43.181786 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:50:46 crc kubenswrapper[4932]: E0218 20:50:46.181925 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:50:50 crc kubenswrapper[4932]: E0218 20:50:50.181365 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:50:51 crc kubenswrapper[4932]: E0218 20:50:51.183454 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:50:51 crc kubenswrapper[4932]: E0218 20:50:51.606773 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:50:51 crc kubenswrapper[4932]: E0218 20:50:51.606940 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:50:51 crc kubenswrapper[4932]: E0218 20:50:51.608228 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:50:57 crc kubenswrapper[4932]: I0218 20:50:57.198315 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:50:57 crc kubenswrapper[4932]: E0218 20:50:57.201277 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:51:01 crc kubenswrapper[4932]: E0218 20:51:01.182669 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:51:02 crc kubenswrapper[4932]: E0218 20:51:02.182332 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:51:03 crc kubenswrapper[4932]: E0218 20:51:03.181625 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" 
podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:51:05 crc kubenswrapper[4932]: E0218 20:51:05.183257 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:51:08 crc kubenswrapper[4932]: I0218 20:51:08.180723 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:51:08 crc kubenswrapper[4932]: E0218 20:51:08.181959 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:51:13 crc kubenswrapper[4932]: E0218 20:51:13.185017 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:51:14 crc kubenswrapper[4932]: E0218 20:51:14.183794 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:51:15 crc kubenswrapper[4932]: E0218 
20:51:15.182464 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:51:17 crc kubenswrapper[4932]: E0218 20:51:17.198210 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:51:20 crc kubenswrapper[4932]: I0218 20:51:20.179413 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:51:20 crc kubenswrapper[4932]: E0218 20:51:20.180402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:51:26 crc kubenswrapper[4932]: E0218 20:51:26.184086 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:51:27 crc kubenswrapper[4932]: E0218 20:51:27.199565 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:51:27 crc kubenswrapper[4932]: E0218 20:51:27.201627 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:51:32 crc kubenswrapper[4932]: E0218 20:51:32.183002 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:51:33 crc kubenswrapper[4932]: I0218 20:51:33.179768 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:51:33 crc kubenswrapper[4932]: E0218 20:51:33.180407 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:51:38 crc kubenswrapper[4932]: E0218 20:51:38.182090 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:51:38 crc kubenswrapper[4932]: E0218 20:51:38.182343 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:51:42 crc kubenswrapper[4932]: E0218 20:51:42.185412 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:51:46 crc kubenswrapper[4932]: E0218 20:51:46.182736 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:51:47 crc kubenswrapper[4932]: I0218 20:51:47.192407 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:51:47 crc kubenswrapper[4932]: E0218 20:51:47.193042 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:51:51 crc kubenswrapper[4932]: E0218 20:51:51.184474 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:51:55 crc kubenswrapper[4932]: E0218 20:51:55.292714 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:51:55 crc kubenswrapper[4932]: E0218 20:51:55.293653 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:51:55 crc kubenswrapper[4932]: E0218 20:51:55.295249 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" 
podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:51:56 crc kubenswrapper[4932]: E0218 20:51:56.182297 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:51:59 crc kubenswrapper[4932]: E0218 20:51:59.183356 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:52:01 crc kubenswrapper[4932]: I0218 20:52:01.179316 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:52:01 crc kubenswrapper[4932]: E0218 20:52:01.180164 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:52:06 crc kubenswrapper[4932]: E0218 20:52:06.182462 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" 
podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:52:08 crc kubenswrapper[4932]: E0218 20:52:08.183071 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:52:10 crc kubenswrapper[4932]: E0218 20:52:10.182327 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:52:10 crc kubenswrapper[4932]: E0218 20:52:10.566732 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:52:10 crc kubenswrapper[4932]: E0218 20:52:10.567160 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:52:10 crc kubenswrapper[4932]: E0218 20:52:10.568667 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:52:12 crc kubenswrapper[4932]: I0218 20:52:12.179949 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:52:12 crc kubenswrapper[4932]: E0218 20:52:12.180916 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:52:19 crc kubenswrapper[4932]: E0218 20:52:19.182416 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:52:22 crc kubenswrapper[4932]: E0218 20:52:22.182245 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:52:23 crc 
kubenswrapper[4932]: E0218 20:52:23.182407 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:52:23 crc kubenswrapper[4932]: E0218 20:52:23.182516 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:52:24 crc kubenswrapper[4932]: I0218 20:52:24.180558 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:52:24 crc kubenswrapper[4932]: E0218 20:52:24.181210 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:52:32 crc kubenswrapper[4932]: E0218 20:52:32.181229 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:52:35 crc kubenswrapper[4932]: E0218 
20:52:35.186408 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:52:35 crc kubenswrapper[4932]: E0218 20:52:35.191304 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:52:37 crc kubenswrapper[4932]: E0218 20:52:37.196786 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:52:38 crc kubenswrapper[4932]: I0218 20:52:38.179831 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e" Feb 18 20:52:39 crc kubenswrapper[4932]: I0218 20:52:39.376932 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"bd0ed877dfd0999db86d9dba9c669bbfc3a7d3697f420a1801bd8b9413d63bf6"} Feb 18 20:52:46 crc kubenswrapper[4932]: E0218 20:52:46.184548 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:52:47 crc kubenswrapper[4932]: E0218 20:52:47.198388 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:52:50 crc kubenswrapper[4932]: E0218 20:52:50.182846 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:52:50 crc kubenswrapper[4932]: E0218 20:52:50.184155 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:52:59 crc kubenswrapper[4932]: E0218 20:52:59.183234 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:53:01 crc kubenswrapper[4932]: E0218 20:53:01.183322 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:53:02 crc kubenswrapper[4932]: E0218 20:53:02.183465 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:53:04 crc kubenswrapper[4932]: E0218 20:53:04.181389 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:53:10 crc kubenswrapper[4932]: E0218 20:53:10.183305 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:53:12 crc kubenswrapper[4932]: E0218 20:53:12.180168 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:53:14 crc kubenswrapper[4932]: E0218 
20:53:14.182902 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:53:19 crc kubenswrapper[4932]: E0218 20:53:19.183460 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:53:22 crc kubenswrapper[4932]: E0218 20:53:22.182842 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:53:26 crc kubenswrapper[4932]: E0218 20:53:26.184405 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:53:27 crc kubenswrapper[4932]: E0218 20:53:27.190321 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:53:30 crc kubenswrapper[4932]: E0218 20:53:30.182296 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:53:36 crc kubenswrapper[4932]: E0218 20:53:36.182424 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:53:38 crc kubenswrapper[4932]: E0218 20:53:38.182265 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:53:39 crc kubenswrapper[4932]: E0218 20:53:39.181655 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:53:43 crc kubenswrapper[4932]: E0218 20:53:43.192425 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:53:50 crc kubenswrapper[4932]: E0218 20:53:50.181249 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:53:50 crc kubenswrapper[4932]: E0218 20:53:50.181573 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:53:52 crc kubenswrapper[4932]: E0218 20:53:52.182504 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:53:55 crc kubenswrapper[4932]: E0218 20:53:55.183029 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:54:02 crc kubenswrapper[4932]: I0218 20:54:02.181221 4932 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 18 20:54:03 crc kubenswrapper[4932]: E0218 20:54:03.181152 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:54:04 crc kubenswrapper[4932]: E0218 20:54:04.181645 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:54:06 crc kubenswrapper[4932]: E0218 20:54:06.112333 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:54:06 crc kubenswrapper[4932]: E0218 20:54:06.112586 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:54:06 crc kubenswrapper[4932]: E0218 20:54:06.113695 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:54:09 crc kubenswrapper[4932]: E0218 20:54:09.181940 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:54:17 crc kubenswrapper[4932]: E0218 20:54:17.196529 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:54:17 crc kubenswrapper[4932]: E0218 20:54:17.197073 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:54:19 crc kubenswrapper[4932]: E0218 20:54:19.182600 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:54:22 crc kubenswrapper[4932]: E0218 20:54:22.181901 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:54:29 crc kubenswrapper[4932]: E0218 20:54:29.182521 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:54:31 crc kubenswrapper[4932]: E0218 20:54:31.182958 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:54:32 crc kubenswrapper[4932]: E0218 20:54:32.182919 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:54:34 crc kubenswrapper[4932]: E0218 20:54:34.181723 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:54:40 crc kubenswrapper[4932]: E0218 20:54:40.193261 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:54:45 crc kubenswrapper[4932]: E0218 20:54:45.188754 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:54:45 crc kubenswrapper[4932]: E0218 20:54:45.188858 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:54:48 crc kubenswrapper[4932]: E0218 20:54:48.181694 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:54:54 crc kubenswrapper[4932]: E0218 
20:54:54.190757 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:54:57 crc kubenswrapper[4932]: I0218 20:54:57.606780 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:54:57 crc kubenswrapper[4932]: I0218 20:54:57.607538 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:55:00 crc kubenswrapper[4932]: E0218 20:55:00.184394 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:55:00 crc kubenswrapper[4932]: E0218 20:55:00.184441 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:55:02 crc kubenswrapper[4932]: E0218 20:55:02.182533 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:55:08 crc kubenswrapper[4932]: E0218 20:55:08.182410 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:55:11 crc kubenswrapper[4932]: E0218 20:55:11.183046 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:55:12 crc kubenswrapper[4932]: E0218 20:55:12.183015 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:55:16 crc kubenswrapper[4932]: E0218 20:55:16.182102 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:55:21 crc kubenswrapper[4932]: E0218 20:55:21.183260 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:55:25 crc kubenswrapper[4932]: E0218 20:55:25.187747 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:55:26 crc kubenswrapper[4932]: E0218 20:55:26.183468 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:55:27 crc kubenswrapper[4932]: I0218 20:55:27.606245 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:55:27 crc kubenswrapper[4932]: I0218 20:55:27.606638 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:55:29 crc kubenswrapper[4932]: E0218 20:55:29.182011 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:55:33 crc kubenswrapper[4932]: E0218 20:55:33.186216 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:55:37 crc kubenswrapper[4932]: E0218 20:55:37.198064 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:55:39 crc kubenswrapper[4932]: E0218 20:55:39.181889 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:55:41 crc kubenswrapper[4932]: E0218 20:55:41.182827 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:55:48 crc kubenswrapper[4932]: E0218 20:55:48.183025 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:55:50 crc kubenswrapper[4932]: E0218 20:55:50.181696 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:55:52 crc kubenswrapper[4932]: E0218 20:55:52.182658 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:55:57 crc kubenswrapper[4932]: E0218 20:55:57.391127 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 18 20:55:57 crc kubenswrapper[4932]: E0218 20:55:57.391824 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:55:57 crc kubenswrapper[4932]: E0218 20:55:57.392902 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:55:57 crc kubenswrapper[4932]: I0218 20:55:57.605902 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:55:57 crc kubenswrapper[4932]: I0218 20:55:57.605993 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:55:57 crc kubenswrapper[4932]: I0218 20:55:57.606055 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4"
Feb 18 20:55:57 crc kubenswrapper[4932]: I0218 20:55:57.607235 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd0ed877dfd0999db86d9dba9c669bbfc3a7d3697f420a1801bd8b9413d63bf6"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 20:55:57 crc kubenswrapper[4932]: I0218 20:55:57.607339 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://bd0ed877dfd0999db86d9dba9c669bbfc3a7d3697f420a1801bd8b9413d63bf6" gracePeriod=600
Feb 18 20:55:58 crc kubenswrapper[4932]: I0218 20:55:58.893555 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="bd0ed877dfd0999db86d9dba9c669bbfc3a7d3697f420a1801bd8b9413d63bf6" exitCode=0
Feb 18 20:55:58 crc kubenswrapper[4932]: I0218 20:55:58.893634 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"bd0ed877dfd0999db86d9dba9c669bbfc3a7d3697f420a1801bd8b9413d63bf6"}
Feb 18 20:55:58 crc kubenswrapper[4932]: I0218 20:55:58.895421 4932 scope.go:117] "RemoveContainer" containerID="a8f79298ecd501a063a90c150f308c52c80b993ae918e04dab7302976417b03e"
Feb 18 20:55:59 crc kubenswrapper[4932]: I0218 20:55:59.911480 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89"}
Feb 18 20:56:03 crc kubenswrapper[4932]: E0218 20:56:03.183720 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:56:03 crc kubenswrapper[4932]: E0218 20:56:03.184420 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:56:05 crc kubenswrapper[4932]: E0218 20:56:05.182922 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:56:10 crc kubenswrapper[4932]: E0218 20:56:10.182261 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:56:14 crc kubenswrapper[4932]: E0218 20:56:14.186431 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:56:17 crc kubenswrapper[4932]: E0218 20:56:17.199445 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:56:18 crc kubenswrapper[4932]: E0218 20:56:18.181052 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:56:25 crc kubenswrapper[4932]: E0218 20:56:25.183034 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:56:28 crc kubenswrapper[4932]: E0218 20:56:28.183628 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:56:32 crc kubenswrapper[4932]: E0218 20:56:32.182548 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:56:33 crc kubenswrapper[4932]: E0218 20:56:33.190524 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:56:36 crc kubenswrapper[4932]: E0218 20:56:36.182952 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:56:40 crc kubenswrapper[4932]: E0218 20:56:40.182007 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:56:44 crc kubenswrapper[4932]: E0218 20:56:44.189985 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:56:44 crc kubenswrapper[4932]: E0218 20:56:44.190067 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:56:48 crc kubenswrapper[4932]: E0218 20:56:48.183412 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:56:55 crc kubenswrapper[4932]: E0218 20:56:55.182023 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:56:58 crc kubenswrapper[4932]: E0218 20:56:58.182509 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:56:59 crc kubenswrapper[4932]: E0218 20:56:59.181498 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:57:00 crc kubenswrapper[4932]: E0218 20:57:00.383363 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 18 20:57:00 crc kubenswrapper[4932]: E0218 20:57:00.383888 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:57:00 crc kubenswrapper[4932]: E0218 20:57:00.385145 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:57:06 crc kubenswrapper[4932]: E0218 20:57:06.181761 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:57:13 crc kubenswrapper[4932]: E0218 20:57:13.181555 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:57:14 crc kubenswrapper[4932]: E0218 20:57:14.182335 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:57:15 crc kubenswrapper[4932]: E0218 20:57:15.896097 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad"
Feb 18 20:57:15 crc kubenswrapper[4932]: E0218 20:57:15.896882 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:57:15 crc kubenswrapper[4932]: E0218 20:57:15.898275 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:57:18 crc kubenswrapper[4932]: E0218 20:57:18.183070 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:57:25 crc kubenswrapper[4932]: E0218 20:57:25.184072 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:57:26 crc kubenswrapper[4932]: E0218 20:57:26.183237 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:57:27 crc kubenswrapper[4932]: E0218 20:57:27.198757 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:57:29 crc kubenswrapper[4932]: E0218 20:57:29.184046 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:57:38 crc kubenswrapper[4932]: E0218 20:57:38.183691 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:57:40 crc kubenswrapper[4932]: E0218 20:57:40.187622 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:57:42 crc kubenswrapper[4932]: E0218 20:57:42.181891 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:57:42 crc kubenswrapper[4932]: E0218 20:57:42.182424 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:57:49 crc kubenswrapper[4932]: E0218 20:57:49.182802 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:57:51 crc kubenswrapper[4932]: E0218 20:57:51.184396 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 20:57:53 crc kubenswrapper[4932]: E0218 20:57:53.183152 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 20:57:53 crc kubenswrapper[4932]: E0218 20:57:53.183225 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 20:58:04 crc kubenswrapper[4932]: E0218 20:58:04.188319 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 20:58:04 crc kubenswrapper[4932]: E0218 20:58:04.188356 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off
pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:58:04 crc kubenswrapper[4932]: E0218 20:58:04.188514 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:58:05 crc kubenswrapper[4932]: E0218 20:58:05.184313 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:58:16 crc kubenswrapper[4932]: E0218 20:58:16.182731 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:58:17 crc kubenswrapper[4932]: E0218 20:58:17.192133 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:58:18 crc kubenswrapper[4932]: E0218 20:58:18.181688 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:58:18 crc kubenswrapper[4932]: E0218 20:58:18.183668 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:58:27 crc kubenswrapper[4932]: E0218 20:58:27.197851 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:58:27 crc kubenswrapper[4932]: I0218 20:58:27.605843 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:58:27 crc kubenswrapper[4932]: I0218 20:58:27.606343 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:58:28 crc kubenswrapper[4932]: E0218 20:58:28.182556 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:58:30 crc kubenswrapper[4932]: E0218 20:58:30.181496 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:58:33 crc kubenswrapper[4932]: E0218 20:58:33.182784 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:58:39 crc kubenswrapper[4932]: E0218 20:58:39.184162 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:58:42 crc kubenswrapper[4932]: E0218 20:58:42.181474 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" 
podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:58:44 crc kubenswrapper[4932]: E0218 20:58:44.183951 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:58:48 crc kubenswrapper[4932]: E0218 20:58:48.182553 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:58:53 crc kubenswrapper[4932]: E0218 20:58:53.183561 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:58:55 crc kubenswrapper[4932]: E0218 20:58:55.196024 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:58:56 crc kubenswrapper[4932]: E0218 20:58:56.183321 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:58:57 crc kubenswrapper[4932]: I0218 20:58:57.605924 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:58:57 crc kubenswrapper[4932]: I0218 20:58:57.606371 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:59:00 crc kubenswrapper[4932]: E0218 20:59:00.185459 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:59:06 crc kubenswrapper[4932]: E0218 20:59:06.182480 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:59:09 crc kubenswrapper[4932]: E0218 20:59:09.182735 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:59:11 crc kubenswrapper[4932]: I0218 20:59:11.182986 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:59:11 crc kubenswrapper[4932]: E0218 20:59:11.588726 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:59:11 crc kubenswrapper[4932]: E0218 20:59:11.589082 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: initializing source 
docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" logger="UnhandledError" Feb 18 20:59:11 crc kubenswrapper[4932]: E0218 20:59:11.591055 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:59:13 crc kubenswrapper[4932]: E0218 20:59:13.185740 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:59:19 crc kubenswrapper[4932]: E0218 20:59:19.182547 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:59:24 crc kubenswrapper[4932]: E0218 20:59:24.184093 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 
20:59:25 crc kubenswrapper[4932]: E0218 20:59:25.185829 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:59:27 crc kubenswrapper[4932]: I0218 20:59:27.606691 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:59:27 crc kubenswrapper[4932]: I0218 20:59:27.607108 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:59:27 crc kubenswrapper[4932]: I0218 20:59:27.607216 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 20:59:27 crc kubenswrapper[4932]: I0218 20:59:27.608099 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:59:27 crc kubenswrapper[4932]: I0218 20:59:27.608222 4932 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" gracePeriod=600 Feb 18 20:59:27 crc kubenswrapper[4932]: E0218 20:59:27.734550 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:59:28 crc kubenswrapper[4932]: E0218 20:59:28.180936 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:59:28 crc kubenswrapper[4932]: I0218 20:59:28.458048 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" exitCode=0 Feb 18 20:59:28 crc kubenswrapper[4932]: I0218 20:59:28.458119 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89"} Feb 18 20:59:28 crc kubenswrapper[4932]: I0218 20:59:28.458243 4932 scope.go:117] "RemoveContainer" containerID="bd0ed877dfd0999db86d9dba9c669bbfc3a7d3697f420a1801bd8b9413d63bf6" Feb 18 20:59:28 crc kubenswrapper[4932]: I0218 20:59:28.459284 4932 scope.go:117] 
"RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 20:59:28 crc kubenswrapper[4932]: E0218 20:59:28.459864 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:59:34 crc kubenswrapper[4932]: E0218 20:59:34.184425 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:59:39 crc kubenswrapper[4932]: E0218 20:59:39.182368 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:59:39 crc kubenswrapper[4932]: E0218 20:59:39.182607 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:59:40 crc kubenswrapper[4932]: E0218 20:59:40.182379 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:59:42 crc kubenswrapper[4932]: I0218 20:59:42.179541 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 20:59:42 crc kubenswrapper[4932]: E0218 20:59:42.180112 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:59:45 crc kubenswrapper[4932]: E0218 20:59:45.181532 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 20:59:49 crc kubenswrapper[4932]: I0218 20:59:49.250940 4932 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-76d44d77c9-sdq6t" podUID="d359b774-654c-4532-8f81-e1beddd68479" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 18 20:59:50 crc kubenswrapper[4932]: E0218 20:59:50.184232 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 20:59:51 crc kubenswrapper[4932]: E0218 20:59:51.182355 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 20:59:53 crc kubenswrapper[4932]: I0218 20:59:53.179890 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 20:59:53 crc kubenswrapper[4932]: E0218 20:59:53.180821 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 20:59:54 crc kubenswrapper[4932]: E0218 20:59:54.182929 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 20:59:57 crc kubenswrapper[4932]: E0218 20:59:57.187380 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.172448 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q"] Feb 18 21:00:00 crc kubenswrapper[4932]: E0218 21:00:00.212978 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2e2003-21f3-440a-85dc-1b34c00c6199" containerName="collect-profiles" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.213020 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2e2003-21f3-440a-85dc-1b34c00c6199" containerName="collect-profiles" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.213451 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d2e2003-21f3-440a-85dc-1b34c00c6199" containerName="collect-profiles" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.214282 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q"] Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.214432 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.216387 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.216948 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.293045 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgz97\" (UniqueName: \"kubernetes.io/projected/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-kube-api-access-qgz97\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.293751 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-secret-volume\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.293884 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-config-volume\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.395527 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-config-volume\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.396052 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgz97\" (UniqueName: \"kubernetes.io/projected/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-kube-api-access-qgz97\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.396203 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-secret-volume\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.397352 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-config-volume\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.403508 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-secret-volume\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.411809 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgz97\" (UniqueName: \"kubernetes.io/projected/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-kube-api-access-qgz97\") pod \"collect-profiles-29524140-dr57q\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:00 crc kubenswrapper[4932]: I0218 21:00:00.533309 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:01 crc kubenswrapper[4932]: I0218 21:00:01.096201 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q"] Feb 18 21:00:01 crc kubenswrapper[4932]: E0218 21:00:01.183428 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:00:01 crc kubenswrapper[4932]: I0218 21:00:01.848315 4932 generic.go:334] "Generic (PLEG): container finished" podID="f40b15f6-29e9-4312-a3d6-b41afbbe13ee" containerID="54724fd9ba60888417cdeadea1f3c9160e76f53fb713dab5b7e78b0a664a686b" exitCode=0 Feb 18 21:00:01 crc kubenswrapper[4932]: I0218 21:00:01.848405 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" event={"ID":"f40b15f6-29e9-4312-a3d6-b41afbbe13ee","Type":"ContainerDied","Data":"54724fd9ba60888417cdeadea1f3c9160e76f53fb713dab5b7e78b0a664a686b"} Feb 18 21:00:01 crc kubenswrapper[4932]: I0218 21:00:01.848561 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" event={"ID":"f40b15f6-29e9-4312-a3d6-b41afbbe13ee","Type":"ContainerStarted","Data":"9be1a18aeb4dd58c3029304908e8ab2aae86f2e97d6fbd794eb63bdb53daa635"} Feb 18 21:00:02 crc kubenswrapper[4932]: E0218 21:00:02.182151 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.255883 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.358338 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgz97\" (UniqueName: \"kubernetes.io/projected/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-kube-api-access-qgz97\") pod \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.358406 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-secret-volume\") pod \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.358591 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-config-volume\") pod \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\" (UID: \"f40b15f6-29e9-4312-a3d6-b41afbbe13ee\") " Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.360580 4932 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-config-volume" (OuterVolumeSpecName: "config-volume") pod "f40b15f6-29e9-4312-a3d6-b41afbbe13ee" (UID: "f40b15f6-29e9-4312-a3d6-b41afbbe13ee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.365148 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f40b15f6-29e9-4312-a3d6-b41afbbe13ee" (UID: "f40b15f6-29e9-4312-a3d6-b41afbbe13ee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.365428 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-kube-api-access-qgz97" (OuterVolumeSpecName: "kube-api-access-qgz97") pod "f40b15f6-29e9-4312-a3d6-b41afbbe13ee" (UID: "f40b15f6-29e9-4312-a3d6-b41afbbe13ee"). InnerVolumeSpecName "kube-api-access-qgz97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.461731 4932 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.462068 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgz97\" (UniqueName: \"kubernetes.io/projected/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-kube-api-access-qgz97\") on node \"crc\" DevicePath \"\"" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.462107 4932 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f40b15f6-29e9-4312-a3d6-b41afbbe13ee-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.865165 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" event={"ID":"f40b15f6-29e9-4312-a3d6-b41afbbe13ee","Type":"ContainerDied","Data":"9be1a18aeb4dd58c3029304908e8ab2aae86f2e97d6fbd794eb63bdb53daa635"} Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.865219 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9be1a18aeb4dd58c3029304908e8ab2aae86f2e97d6fbd794eb63bdb53daa635" Feb 18 21:00:03 crc kubenswrapper[4932]: I0218 21:00:03.865268 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524140-dr57q" Feb 18 21:00:04 crc kubenswrapper[4932]: I0218 21:00:04.351864 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"] Feb 18 21:00:04 crc kubenswrapper[4932]: I0218 21:00:04.359590 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-k4shl"] Feb 18 21:00:05 crc kubenswrapper[4932]: I0218 21:00:05.211635 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43cf3e74-b4e7-4f54-b21c-cf9018235782" path="/var/lib/kubelet/pods/43cf3e74-b4e7-4f54-b21c-cf9018235782/volumes" Feb 18 21:00:07 crc kubenswrapper[4932]: I0218 21:00:07.410047 4932 scope.go:117] "RemoveContainer" containerID="b79875069ecc1431ced41ee0aadf13bbe89c7ee6b34078234cc6eb1c6d79dd0b" Feb 18 21:00:08 crc kubenswrapper[4932]: I0218 21:00:08.179909 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:00:08 crc kubenswrapper[4932]: E0218 21:00:08.180337 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:00:09 crc kubenswrapper[4932]: E0218 21:00:09.184153 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" 
podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:00:09 crc kubenswrapper[4932]: E0218 21:00:09.184712 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:00:15 crc kubenswrapper[4932]: E0218 21:00:15.186617 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:00:16 crc kubenswrapper[4932]: E0218 21:00:16.181225 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:00:21 crc kubenswrapper[4932]: E0218 21:00:21.181309 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:00:21 crc kubenswrapper[4932]: E0218 21:00:21.181340 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:00:22 crc kubenswrapper[4932]: I0218 21:00:22.190831 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:00:22 crc kubenswrapper[4932]: E0218 21:00:22.207821 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:00:29 crc kubenswrapper[4932]: E0218 21:00:29.184008 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:00:31 crc kubenswrapper[4932]: E0218 21:00:31.181672 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:00:32 crc kubenswrapper[4932]: E0218 21:00:32.183995 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:00:34 crc kubenswrapper[4932]: E0218 21:00:34.183263 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:00:35 crc kubenswrapper[4932]: I0218 21:00:35.180491 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:00:35 crc kubenswrapper[4932]: E0218 21:00:35.181754 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:00:41 crc kubenswrapper[4932]: E0218 21:00:41.183897 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:00:43 crc kubenswrapper[4932]: E0218 21:00:43.183313 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:00:45 crc kubenswrapper[4932]: E0218 21:00:45.185849 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:00:46 crc kubenswrapper[4932]: I0218 21:00:46.180150 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:00:46 crc kubenswrapper[4932]: E0218 21:00:46.180837 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:00:46 crc kubenswrapper[4932]: E0218 21:00:46.183056 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:00:55 crc kubenswrapper[4932]: E0218 21:00:55.183378 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:00:55 crc kubenswrapper[4932]: E0218 21:00:55.183472 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:00:56 crc kubenswrapper[4932]: E0218 21:00:56.184862 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:00:58 crc kubenswrapper[4932]: I0218 21:00:58.180368 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:00:58 crc kubenswrapper[4932]: E0218 21:00:58.180841 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.164627 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524141-46g96"] Feb 18 21:01:00 crc kubenswrapper[4932]: E0218 21:01:00.165556 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40b15f6-29e9-4312-a3d6-b41afbbe13ee" containerName="collect-profiles" Feb 18 21:01:00 crc 
kubenswrapper[4932]: I0218 21:01:00.165569 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40b15f6-29e9-4312-a3d6-b41afbbe13ee" containerName="collect-profiles" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.165768 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40b15f6-29e9-4312-a3d6-b41afbbe13ee" containerName="collect-profiles" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.166625 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.180376 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524141-46g96"] Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.308303 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-combined-ca-bundle\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.308594 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-config-data\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.308627 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-fernet-keys\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.308670 
4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrf7\" (UniqueName: \"kubernetes.io/projected/39b3cde9-8940-4757-8073-9f90910d6a30-kube-api-access-tvrf7\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.410515 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-config-data\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.410565 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-fernet-keys\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.410606 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvrf7\" (UniqueName: \"kubernetes.io/projected/39b3cde9-8940-4757-8073-9f90910d6a30-kube-api-access-tvrf7\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.410690 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-combined-ca-bundle\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.418665 4932 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-config-data\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.421638 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-fernet-keys\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.425208 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-combined-ca-bundle\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.446912 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvrf7\" (UniqueName: \"kubernetes.io/projected/39b3cde9-8940-4757-8073-9f90910d6a30-kube-api-access-tvrf7\") pod \"keystone-cron-29524141-46g96\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:00 crc kubenswrapper[4932]: I0218 21:01:00.489252 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:01 crc kubenswrapper[4932]: I0218 21:01:01.063135 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524141-46g96"] Feb 18 21:01:01 crc kubenswrapper[4932]: I0218 21:01:01.588878 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524141-46g96" event={"ID":"39b3cde9-8940-4757-8073-9f90910d6a30","Type":"ContainerStarted","Data":"d2fd48309e425e11937534e0fcf43aa043e873c73ab1c12d5e2f7045fa7f9139"} Feb 18 21:01:01 crc kubenswrapper[4932]: I0218 21:01:01.589344 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524141-46g96" event={"ID":"39b3cde9-8940-4757-8073-9f90910d6a30","Type":"ContainerStarted","Data":"c5d586f009677da884c30ca0caa506073f0c1a94d127ca41f5c3693329f132b6"} Feb 18 21:01:05 crc kubenswrapper[4932]: I0218 21:01:05.651922 4932 generic.go:334] "Generic (PLEG): container finished" podID="39b3cde9-8940-4757-8073-9f90910d6a30" containerID="d2fd48309e425e11937534e0fcf43aa043e873c73ab1c12d5e2f7045fa7f9139" exitCode=0 Feb 18 21:01:05 crc kubenswrapper[4932]: I0218 21:01:05.652009 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524141-46g96" event={"ID":"39b3cde9-8940-4757-8073-9f90910d6a30","Type":"ContainerDied","Data":"d2fd48309e425e11937534e0fcf43aa043e873c73ab1c12d5e2f7045fa7f9139"} Feb 18 21:01:07 crc kubenswrapper[4932]: E0218 21:01:07.192836 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.211009 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.282838 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-fernet-keys\") pod \"39b3cde9-8940-4757-8073-9f90910d6a30\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.283431 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-combined-ca-bundle\") pod \"39b3cde9-8940-4757-8073-9f90910d6a30\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.283657 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvrf7\" (UniqueName: \"kubernetes.io/projected/39b3cde9-8940-4757-8073-9f90910d6a30-kube-api-access-tvrf7\") pod \"39b3cde9-8940-4757-8073-9f90910d6a30\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.283731 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-config-data\") pod \"39b3cde9-8940-4757-8073-9f90910d6a30\" (UID: \"39b3cde9-8940-4757-8073-9f90910d6a30\") " Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.302523 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b3cde9-8940-4757-8073-9f90910d6a30-kube-api-access-tvrf7" (OuterVolumeSpecName: "kube-api-access-tvrf7") pod "39b3cde9-8940-4757-8073-9f90910d6a30" (UID: "39b3cde9-8940-4757-8073-9f90910d6a30"). InnerVolumeSpecName "kube-api-access-tvrf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.305317 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "39b3cde9-8940-4757-8073-9f90910d6a30" (UID: "39b3cde9-8940-4757-8073-9f90910d6a30"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.324058 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39b3cde9-8940-4757-8073-9f90910d6a30" (UID: "39b3cde9-8940-4757-8073-9f90910d6a30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.381807 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-config-data" (OuterVolumeSpecName: "config-data") pod "39b3cde9-8940-4757-8073-9f90910d6a30" (UID: "39b3cde9-8940-4757-8073-9f90910d6a30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.387696 4932 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.387742 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvrf7\" (UniqueName: \"kubernetes.io/projected/39b3cde9-8940-4757-8073-9f90910d6a30-kube-api-access-tvrf7\") on node \"crc\" DevicePath \"\"" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.387761 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.387775 4932 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39b3cde9-8940-4757-8073-9f90910d6a30-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 21:01:07 crc kubenswrapper[4932]: E0218 21:01:07.406088 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 502 Bad Gateway" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 21:01:07 crc kubenswrapper[4932]: E0218 21:01:07.406280 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 502 Bad Gateway" logger="UnhandledError" Feb 18 21:01:07 crc kubenswrapper[4932]: E0218 21:01:07.407513 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 502 Bad Gateway\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.682817 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524141-46g96" event={"ID":"39b3cde9-8940-4757-8073-9f90910d6a30","Type":"ContainerDied","Data":"c5d586f009677da884c30ca0caa506073f0c1a94d127ca41f5c3693329f132b6"} Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.682874 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5d586f009677da884c30ca0caa506073f0c1a94d127ca41f5c3693329f132b6" Feb 18 21:01:07 crc kubenswrapper[4932]: I0218 21:01:07.682927 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524141-46g96" Feb 18 21:01:08 crc kubenswrapper[4932]: E0218 21:01:08.183374 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:01:11 crc kubenswrapper[4932]: I0218 21:01:11.179561 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:01:11 crc kubenswrapper[4932]: E0218 21:01:11.180640 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:01:11 crc kubenswrapper[4932]: E0218 21:01:11.184663 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:01:18 crc kubenswrapper[4932]: E0218 21:01:18.183403 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:01:21 crc kubenswrapper[4932]: E0218 21:01:21.184202 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:01:23 crc kubenswrapper[4932]: I0218 21:01:23.180409 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:01:23 crc kubenswrapper[4932]: E0218 21:01:23.181301 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:01:23 crc kubenswrapper[4932]: E0218 21:01:23.186289 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:01:26 crc kubenswrapper[4932]: E0218 21:01:26.184039 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:01:29 crc kubenswrapper[4932]: E0218 21:01:29.183695 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:01:33 crc kubenswrapper[4932]: E0218 21:01:33.184918 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:01:35 crc kubenswrapper[4932]: E0218 21:01:35.186271 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:01:37 crc 
kubenswrapper[4932]: I0218 21:01:37.190846 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:01:37 crc kubenswrapper[4932]: E0218 21:01:37.192237 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:01:40 crc kubenswrapper[4932]: E0218 21:01:40.182282 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:01:44 crc kubenswrapper[4932]: E0218 21:01:44.182461 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:01:44 crc kubenswrapper[4932]: E0218 21:01:44.182577 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:01:48 crc kubenswrapper[4932]: E0218 21:01:48.183222 4932 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:01:51 crc kubenswrapper[4932]: I0218 21:01:51.180842 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:01:51 crc kubenswrapper[4932]: E0218 21:01:51.182002 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:01:51 crc kubenswrapper[4932]: E0218 21:01:51.184698 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:01:56 crc kubenswrapper[4932]: E0218 21:01:56.184056 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:01:56 crc kubenswrapper[4932]: E0218 21:01:56.184098 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:02:00 crc kubenswrapper[4932]: E0218 21:02:00.182799 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:02:02 crc kubenswrapper[4932]: I0218 21:02:02.180551 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:02:02 crc kubenswrapper[4932]: E0218 21:02:02.181356 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:02:05 crc kubenswrapper[4932]: E0218 21:02:05.807822 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:02:07 crc kubenswrapper[4932]: E0218 21:02:07.195693 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:02:13 crc kubenswrapper[4932]: E0218 21:02:13.182979 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:02:17 crc kubenswrapper[4932]: I0218 21:02:17.191825 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:02:17 crc kubenswrapper[4932]: E0218 21:02:17.192696 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:02:18 crc kubenswrapper[4932]: E0218 21:02:18.183065 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:02:19 crc kubenswrapper[4932]: E0218 21:02:19.386503 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: 
reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 21:02:19 crc kubenswrapper[4932]: E0218 21:02:19.386894 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: initializing source 
docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout" logger="UnhandledError" Feb 18 21:02:19 crc kubenswrapper[4932]: E0218 21:02:19.388146 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:02:24 crc kubenswrapper[4932]: E0218 21:02:24.182656 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:02:30 crc kubenswrapper[4932]: I0218 21:02:30.180112 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:02:30 crc kubenswrapper[4932]: E0218 21:02:30.180857 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:02:32 crc kubenswrapper[4932]: E0218 21:02:32.183793 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:02:34 crc kubenswrapper[4932]: E0218 21:02:34.183274 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:02:35 crc kubenswrapper[4932]: E0218 21:02:35.188878 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:02:36 crc kubenswrapper[4932]: E0218 21:02:36.291344 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 21:02:36 crc kubenswrapper[4932]: E0218 21:02:36.291956 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:02:36 crc kubenswrapper[4932]: E0218 21:02:36.293462 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:02:42 crc kubenswrapper[4932]: I0218 21:02:42.179440 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:02:42 crc kubenswrapper[4932]: E0218 21:02:42.180943 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:02:44 crc kubenswrapper[4932]: E0218 21:02:44.183240 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:02:46 crc kubenswrapper[4932]: E0218 21:02:46.180465 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:02:48 crc kubenswrapper[4932]: E0218 21:02:48.183055 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:02:49 crc kubenswrapper[4932]: E0218 21:02:49.182630 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:02:57 crc kubenswrapper[4932]: I0218 21:02:57.193684 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:02:57 crc kubenswrapper[4932]: E0218 21:02:57.195047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:02:58 crc kubenswrapper[4932]: E0218 21:02:58.182890 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:02:59 crc kubenswrapper[4932]: E0218 21:02:59.182924 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:03:00 crc kubenswrapper[4932]: E0218 21:03:00.185110 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:03:01 crc kubenswrapper[4932]: E0218 21:03:01.183135 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:03:09 crc kubenswrapper[4932]: E0218 21:03:09.185262 4932 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:03:11 crc kubenswrapper[4932]: I0218 21:03:11.181771 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:03:11 crc kubenswrapper[4932]: E0218 21:03:11.182535 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:03:11 crc kubenswrapper[4932]: E0218 21:03:11.183702 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:03:15 crc kubenswrapper[4932]: E0218 21:03:15.187468 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:03:15 crc kubenswrapper[4932]: E0218 21:03:15.187635 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:03:20 crc kubenswrapper[4932]: E0218 21:03:20.182162 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:03:24 crc kubenswrapper[4932]: I0218 21:03:24.180426 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:03:24 crc kubenswrapper[4932]: E0218 21:03:24.181665 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:03:26 crc kubenswrapper[4932]: E0218 21:03:26.182531 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:03:27 crc kubenswrapper[4932]: E0218 21:03:27.198413 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:03:28 crc kubenswrapper[4932]: E0218 21:03:28.182334 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:03:31 crc kubenswrapper[4932]: E0218 21:03:31.183409 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:03:38 crc kubenswrapper[4932]: I0218 21:03:38.180863 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:03:38 crc kubenswrapper[4932]: E0218 21:03:38.181985 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:03:38 crc kubenswrapper[4932]: E0218 21:03:38.182388 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:03:40 crc kubenswrapper[4932]: E0218 21:03:40.181009 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:03:42 crc kubenswrapper[4932]: E0218 21:03:42.181576 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:03:42 crc kubenswrapper[4932]: E0218 21:03:42.181815 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:03:49 crc kubenswrapper[4932]: I0218 21:03:49.183236 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:03:49 crc kubenswrapper[4932]: E0218 21:03:49.184399 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:03:50 crc kubenswrapper[4932]: E0218 21:03:50.182277 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:03:53 crc kubenswrapper[4932]: E0218 21:03:53.185064 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:03:53 crc kubenswrapper[4932]: E0218 21:03:53.187550 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:03:54 crc kubenswrapper[4932]: E0218 21:03:54.181952 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:04:03 crc kubenswrapper[4932]: I0218 21:04:03.181309 4932 scope.go:117] 
"RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:04:03 crc kubenswrapper[4932]: E0218 21:04:03.182455 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:04:04 crc kubenswrapper[4932]: E0218 21:04:04.182254 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:04:05 crc kubenswrapper[4932]: E0218 21:04:05.181243 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:04:06 crc kubenswrapper[4932]: E0218 21:04:06.184402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:04:07 crc kubenswrapper[4932]: E0218 21:04:07.201006 4932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:04:14 crc kubenswrapper[4932]: I0218 21:04:14.180740 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:04:14 crc kubenswrapper[4932]: E0218 21:04:14.181889 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:04:17 crc kubenswrapper[4932]: E0218 21:04:17.197238 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:04:17 crc kubenswrapper[4932]: I0218 21:04:17.197474 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 21:04:18 crc kubenswrapper[4932]: E0218 21:04:18.182471 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:04:20 crc kubenswrapper[4932]: E0218 21:04:20.180918 4932 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:04:25 crc kubenswrapper[4932]: I0218 21:04:25.180087 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:04:25 crc kubenswrapper[4932]: E0218 21:04:25.180892 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" Feb 18 21:04:25 crc kubenswrapper[4932]: I0218 21:04:25.644207 4932 generic.go:334] "Generic (PLEG): container finished" podID="2947758a-fd4b-4a4a-956a-41fefa7296a0" containerID="b8cc57bfeb38d618854d30ad2a0303534b4a0674c797bd1d7dcd4db1e8159186" exitCode=0 Feb 18 21:04:25 crc kubenswrapper[4932]: I0218 21:04:25.644326 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2947758a-fd4b-4a4a-956a-41fefa7296a0","Type":"ContainerDied","Data":"b8cc57bfeb38d618854d30ad2a0303534b4a0674c797bd1d7dcd4db1e8159186"} Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.125574 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236237 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-config-data\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236343 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236389 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-temporary\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236430 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ca-certs\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236505 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ssh-key\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236547 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config-secret\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236604 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-workdir\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236635 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.236671 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fhls\" (UniqueName: \"kubernetes.io/projected/2947758a-fd4b-4a4a-956a-41fefa7296a0-kube-api-access-7fhls\") pod \"2947758a-fd4b-4a4a-956a-41fefa7296a0\" (UID: \"2947758a-fd4b-4a4a-956a-41fefa7296a0\") " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.238568 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.238728 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-config-data" (OuterVolumeSpecName: "config-data") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.244148 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.258333 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.258326 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2947758a-fd4b-4a4a-956a-41fefa7296a0-kube-api-access-7fhls" (OuterVolumeSpecName: "kube-api-access-7fhls") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "kube-api-access-7fhls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.286009 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.288301 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.302650 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.308210 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2947758a-fd4b-4a4a-956a-41fefa7296a0" (UID: "2947758a-fd4b-4a4a-956a-41fefa7296a0"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 21:04:27 crc kubenswrapper[4932]: E0218 21:04:27.320908 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 502 Bad Gateway" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 21:04:27 crc kubenswrapper[4932]: E0218 21:04:27.321126 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 502 Bad Gateway" logger="UnhandledError" Feb 18 21:04:27 crc kubenswrapper[4932]: E0218 21:04:27.322464 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source 
docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 502 Bad Gateway\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339094 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339129 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fhls\" (UniqueName: \"kubernetes.io/projected/2947758a-fd4b-4a4a-956a-41fefa7296a0-kube-api-access-7fhls\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339140 4932 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2947758a-fd4b-4a4a-956a-41fefa7296a0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339159 4932 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339185 4932 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339196 4932 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 
21:04:27.339205 4932 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339213 4932 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2947758a-fd4b-4a4a-956a-41fefa7296a0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.339222 4932 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2947758a-fd4b-4a4a-956a-41fefa7296a0-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.360216 4932 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.441700 4932 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.671537 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2947758a-fd4b-4a4a-956a-41fefa7296a0","Type":"ContainerDied","Data":"8c283e884b8f80bf01f3a12151451c0769e806057cd6f8d4c57d644f30012eb1"} Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.671599 4932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c283e884b8f80bf01f3a12151451c0769e806057cd6f8d4c57d644f30012eb1" Feb 18 21:04:27 crc kubenswrapper[4932]: I0218 21:04:27.671637 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 21:04:30 crc kubenswrapper[4932]: E0218 21:04:30.181875 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:04:31 crc kubenswrapper[4932]: E0218 21:04:31.183689 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.348041 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 21:04:32 crc kubenswrapper[4932]: E0218 21:04:32.348719 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b3cde9-8940-4757-8073-9f90910d6a30" containerName="keystone-cron" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.348761 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b3cde9-8940-4757-8073-9f90910d6a30" containerName="keystone-cron" Feb 18 21:04:32 crc kubenswrapper[4932]: E0218 21:04:32.348792 4932 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2947758a-fd4b-4a4a-956a-41fefa7296a0" containerName="tempest-tests-tempest-tests-runner" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.348801 4932 state_mem.go:107] "Deleted CPUSet assignment" podUID="2947758a-fd4b-4a4a-956a-41fefa7296a0" containerName="tempest-tests-tempest-tests-runner" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.349261 4932 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2947758a-fd4b-4a4a-956a-41fefa7296a0" containerName="tempest-tests-tempest-tests-runner" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.349321 4932 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b3cde9-8940-4757-8073-9f90910d6a30" containerName="keystone-cron" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.350714 4932 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.353627 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bccj2" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.361461 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.477423 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.477508 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dqq\" (UniqueName: \"kubernetes.io/projected/93c5c98c-cc87-4938-982e-54d3e1663dda-kube-api-access-z4dqq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.579413 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") 
pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.579508 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4dqq\" (UniqueName: \"kubernetes.io/projected/93c5c98c-cc87-4938-982e-54d3e1663dda-kube-api-access-z4dqq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.580573 4932 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.609603 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4dqq\" (UniqueName: \"kubernetes.io/projected/93c5c98c-cc87-4938-982e-54d3e1663dda-kube-api-access-z4dqq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.632497 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"93c5c98c-cc87-4938-982e-54d3e1663dda\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:32 crc kubenswrapper[4932]: I0218 21:04:32.678043 4932 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 21:04:33 crc kubenswrapper[4932]: E0218 21:04:33.181768 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:04:33 crc kubenswrapper[4932]: I0218 21:04:33.219049 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 21:04:33 crc kubenswrapper[4932]: I0218 21:04:33.752036 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"93c5c98c-cc87-4938-982e-54d3e1663dda","Type":"ContainerStarted","Data":"4d89e3959819723e86718f91888f7bdd71db7dc81fc96cabec42c9f19d6ae047"} Feb 18 21:04:36 crc kubenswrapper[4932]: I0218 21:04:36.180536 4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:04:36 crc kubenswrapper[4932]: I0218 21:04:36.794239 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"ede191f8512d552532e9e192938bd11d4065e409a6f674bff66d945fc49b0e49"} Feb 18 21:04:38 crc kubenswrapper[4932]: E0218 21:04:38.181236 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:04:41 crc kubenswrapper[4932]: E0218 21:04:41.184421 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:04:43 crc kubenswrapper[4932]: E0218 21:04:43.183061 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:04:48 crc kubenswrapper[4932]: E0218 21:04:48.182486 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:04:52 crc kubenswrapper[4932]: E0218 21:04:52.181164 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:04:53 crc kubenswrapper[4932]: E0218 21:04:53.180941 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:04:53 crc kubenswrapper[4932]: E0218 21:04:53.767241 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:04:53 crc kubenswrapper[4932]: E0218 21:04:53.767750 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(93c5c98c-cc87-4938-982e-54d3e1663dda): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:04:53 crc kubenswrapper[4932]: E0218 21:04:53.769025 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:04:53 crc kubenswrapper[4932]: E0218 21:04:53.994539 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:04:54 crc kubenswrapper[4932]: E0218 21:04:54.182519 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:05:01 crc kubenswrapper[4932]: E0218 21:05:01.182628 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" 
Feb 18 21:05:04 crc kubenswrapper[4932]: E0218 21:05:04.183675 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:05:08 crc kubenswrapper[4932]: E0218 21:05:08.182656 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:05:09 crc kubenswrapper[4932]: E0218 21:05:09.184931 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:05:13 crc kubenswrapper[4932]: E0218 21:05:13.183761 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:05:18 crc kubenswrapper[4932]: E0218 21:05:18.183311 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:05:23 crc kubenswrapper[4932]: E0218 21:05:23.202410 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:05:23 crc kubenswrapper[4932]: E0218 21:05:23.202420 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:05:28 crc kubenswrapper[4932]: E0218 21:05:28.182224 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:05:29 crc kubenswrapper[4932]: E0218 21:05:29.293445 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:05:29 crc kubenswrapper[4932]: E0218 21:05:29.293963 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(93c5c98c-cc87-4938-982e-54d3e1663dda): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:05:29 crc kubenswrapper[4932]: E0218 21:05:29.295334 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:05:31 crc kubenswrapper[4932]: E0218 21:05:31.183165 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:05:35 crc kubenswrapper[4932]: E0218 21:05:35.184545 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:05:37 crc kubenswrapper[4932]: E0218 21:05:37.199037 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:05:40 crc kubenswrapper[4932]: E0218 21:05:40.181760 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:05:44 crc kubenswrapper[4932]: E0218 21:05:44.183720 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:05:45 crc kubenswrapper[4932]: E0218 21:05:45.181734 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:05:47 crc kubenswrapper[4932]: E0218 21:05:47.193886 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:05:49 crc kubenswrapper[4932]: E0218 21:05:49.181904 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:05:55 crc kubenswrapper[4932]: E0218 21:05:55.186487 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:06:00 crc kubenswrapper[4932]: E0218 21:06:00.181161 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:06:01 crc kubenswrapper[4932]: E0218 21:06:01.188278 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:06:02 crc kubenswrapper[4932]: E0218 21:06:02.181531 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:06:07 crc kubenswrapper[4932]: E0218 21:06:07.194216 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:06:12 crc kubenswrapper[4932]: E0218 21:06:12.336050 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:06:12 crc kubenswrapper[4932]: E0218 21:06:12.337208 4932 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(93c5c98c-cc87-4938-982e-54d3e1663dda): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:06:12 crc kubenswrapper[4932]: E0218 21:06:12.338539 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:06:14 crc kubenswrapper[4932]: E0218 21:06:14.181991 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:06:16 crc kubenswrapper[4932]: E0218 21:06:16.183059 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:06:19 crc kubenswrapper[4932]: E0218 21:06:19.182745 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:06:23 crc kubenswrapper[4932]: E0218 21:06:23.545076 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/community-operator-index: received unexpected HTTP status: 504 Gateway Timeout" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 21:06:23 crc kubenswrapper[4932]: E0218 21:06:23.545785 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8slcg_openshift-marketplace(57dbf2a4-5676-4291-911d-00038d3c7c75): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/community-operator-index: received unexpected HTTP status: 504 Gateway Timeout" logger="UnhandledError" Feb 18 21:06:23 crc kubenswrapper[4932]: E0218 21:06:23.547127 4932 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/community-operator-index: received unexpected HTTP status: 504 Gateway Timeout\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:06:25 crc kubenswrapper[4932]: E0218 21:06:25.182501 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:06:27 crc kubenswrapper[4932]: E0218 21:06:27.193744 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:06:30 crc kubenswrapper[4932]: E0218 21:06:30.182553 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:06:33 crc kubenswrapper[4932]: E0218 21:06:33.184269 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:06:36 crc kubenswrapper[4932]: E0218 21:06:36.186319 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:06:39 crc kubenswrapper[4932]: E0218 21:06:39.182758 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:06:40 crc kubenswrapper[4932]: E0218 21:06:40.182029 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:06:41 crc kubenswrapper[4932]: E0218 21:06:41.181754 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:06:46 crc kubenswrapper[4932]: E0218 21:06:46.183016 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:06:50 crc kubenswrapper[4932]: E0218 21:06:50.182199 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:06:51 crc kubenswrapper[4932]: E0218 21:06:51.195660 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:06:51 crc kubenswrapper[4932]: E0218 21:06:51.196495 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:06:56 crc kubenswrapper[4932]: E0218 21:06:56.184104 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:06:57 crc kubenswrapper[4932]: I0218 
21:06:57.605794 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:06:57 crc kubenswrapper[4932]: I0218 21:06:57.606370 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:06:58 crc kubenswrapper[4932]: E0218 21:06:58.182688 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:07:02 crc kubenswrapper[4932]: E0218 21:07:02.181974 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:07:03 crc kubenswrapper[4932]: E0218 21:07:03.183379 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:07:07 crc kubenswrapper[4932]: E0218 21:07:07.191635 
4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:07:10 crc kubenswrapper[4932]: E0218 21:07:10.184446 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:07:14 crc kubenswrapper[4932]: E0218 21:07:14.184068 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.367716 4932 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qrwmf/must-gather-8wp6g"] Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.369637 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.371325 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qrwmf"/"kube-root-ca.crt" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.371865 4932 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qrwmf"/"default-dockercfg-kwmlf" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.372061 4932 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qrwmf"/"openshift-service-ca.crt" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.381569 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qrwmf/must-gather-8wp6g"] Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.471098 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-must-gather-output\") pod \"must-gather-8wp6g\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.471702 4932 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9pzg\" (UniqueName: \"kubernetes.io/projected/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-kube-api-access-l9pzg\") pod \"must-gather-8wp6g\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.573078 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9pzg\" (UniqueName: \"kubernetes.io/projected/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-kube-api-access-l9pzg\") pod \"must-gather-8wp6g\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " 
pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.573206 4932 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-must-gather-output\") pod \"must-gather-8wp6g\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.573767 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-must-gather-output\") pod \"must-gather-8wp6g\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.592813 4932 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9pzg\" (UniqueName: \"kubernetes.io/projected/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-kube-api-access-l9pzg\") pod \"must-gather-8wp6g\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:15 crc kubenswrapper[4932]: I0218 21:07:15.699186 4932 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:16 crc kubenswrapper[4932]: I0218 21:07:16.224772 4932 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qrwmf/must-gather-8wp6g"] Feb 18 21:07:16 crc kubenswrapper[4932]: I0218 21:07:16.834381 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" event={"ID":"dcf83976-1f0c-4cf7-91d8-3f0def01fe46","Type":"ContainerStarted","Data":"258a813aa68937edd52a807f5c0f7e594bcd8054419de7e27604d62fdf4c4a65"} Feb 18 21:07:17 crc kubenswrapper[4932]: E0218 21:07:17.196920 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:07:19 crc kubenswrapper[4932]: E0218 21:07:19.213449 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:07:21 crc kubenswrapper[4932]: E0218 21:07:21.182667 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:07:22 crc kubenswrapper[4932]: E0218 21:07:22.295114 4932 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:07:22 crc kubenswrapper[4932]: E0218 21:07:22.296673 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(93c5c98c-cc87-4938-982e-54d3e1663dda): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:07:22 crc kubenswrapper[4932]: E0218 21:07:22.297916 4932 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:07:27 crc kubenswrapper[4932]: E0218 21:07:27.200337 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:07:27 crc kubenswrapper[4932]: I0218 21:07:27.606591 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:07:27 crc kubenswrapper[4932]: I0218 21:07:27.606947 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:07:34 crc kubenswrapper[4932]: E0218 21:07:34.182324 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:07:34 crc 
kubenswrapper[4932]: E0218 21:07:34.182365 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:07:36 crc kubenswrapper[4932]: E0218 21:07:36.182962 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:07:36 crc kubenswrapper[4932]: E0218 21:07:36.338876 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Feb 18 21:07:36 crc kubenswrapper[4932]: E0218 21:07:36.339148 4932 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 21:07:36 crc kubenswrapper[4932]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Feb 18 21:07:36 crc kubenswrapper[4932]: HAVE_SESSION_TOOLS=true Feb 18 21:07:36 crc kubenswrapper[4932]: else Feb 18 21:07:36 crc kubenswrapper[4932]: HAVE_SESSION_TOOLS=false Feb 18 21:07:36 crc kubenswrapper[4932]: fi Feb 18 21:07:36 crc kubenswrapper[4932]: Feb 18 21:07:36 crc kubenswrapper[4932]: Feb 18 21:07:36 crc kubenswrapper[4932]: echo "[disk usage checker] 
Started" Feb 18 21:07:36 crc kubenswrapper[4932]: target_dir="/must-gather" Feb 18 21:07:36 crc kubenswrapper[4932]: usage_percentage_limit="80" Feb 18 21:07:36 crc kubenswrapper[4932]: while true; do Feb 18 21:07:36 crc kubenswrapper[4932]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Feb 18 21:07:36 crc kubenswrapper[4932]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Feb 18 21:07:36 crc kubenswrapper[4932]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Feb 18 21:07:36 crc kubenswrapper[4932]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." Feb 18 21:07:36 crc kubenswrapper[4932]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 21:07:36 crc kubenswrapper[4932]: ps -o sess --no-headers | sort -u | while read sid; do Feb 18 21:07:36 crc kubenswrapper[4932]: [[ "$sid" -eq "${$}" ]] && continue Feb 18 21:07:36 crc kubenswrapper[4932]: pkill --signal SIGKILL --session "$sid" Feb 18 21:07:36 crc kubenswrapper[4932]: done Feb 18 21:07:36 crc kubenswrapper[4932]: else Feb 18 21:07:36 crc kubenswrapper[4932]: kill 0 Feb 18 21:07:36 crc kubenswrapper[4932]: fi Feb 18 21:07:36 crc kubenswrapper[4932]: exit 1 Feb 18 21:07:36 crc kubenswrapper[4932]: fi Feb 18 21:07:36 crc kubenswrapper[4932]: sleep 5 Feb 18 21:07:36 crc kubenswrapper[4932]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 21:07:36 crc kubenswrapper[4932]: setsid -w bash <<-MUSTGATHER_EOF Feb 18 21:07:36 crc kubenswrapper[4932]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 21:07:36 crc kubenswrapper[4932]: MUSTGATHER_EOF Feb 18 21:07:36 crc kubenswrapper[4932]: else Feb 18 21:07:36 crc kubenswrapper[4932]: 
ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 21:07:36 crc kubenswrapper[4932]: fi; sync && echo 'Caches written to disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9pzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-8wp6g_openshift-must-gather-qrwmf(dcf83976-1f0c-4cf7-91d8-3f0def01fe46): ErrImagePull: initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out Feb 18 21:07:36 crc kubenswrapper[4932]: > logger="UnhandledError" Feb 18 21:07:36 crc kubenswrapper[4932]: E0218 21:07:36.350666 4932 pod_workers.go:1301] "Error syncing pod, skipping" 
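The gather container's command, spread across the log lines above, embeds a small disk-usage watchdog that polls `df` against `/must-gather` and kills the gather session when usage passes 80%. A readable reconstruction of its core check, refactored into a function for illustration (the directory and limit values come from the log; the function name `check_usage` and the single-pass shape are ours, since the original runs this in a backgrounded `while true; do ...; sleep 5; done` loop):

```shell
#!/bin/bash
# Sketch reconstructed from the must-gather container command logged above.
# check_usage DIR LIMIT -> exit status 0 if usage <= LIMIT percent, 1 otherwise.
check_usage() {
  local target_dir="$1" limit="$2" usage
  # df -P forces POSIX one-line-per-filesystem output; NR==2 is the data
  # row, $5 is the "Use%" column; sed strips the trailing percent sign.
  usage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//')
  echo "[disk usage checker] Volume usage percentage: current = ${usage} ; allowed = ${limit}"
  [ "$usage" -le "$limit" ]
}

# In the logged script this check runs every 5 seconds in the background;
# on failure it SIGKILLs every other process session via
# `pkill --signal SIGKILL --session "$sid"` (or `kill 0` when setsid/ps/pkill
# are unavailable) and exits 1, terminating the gather early.
```

The session-based kill is what lets the watchdog tear down the whole `setsid`-wrapped gather process tree without killing itself, which a plain `kill 0` in the same process group could not guarantee.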
err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"initializing source docker://quay.io/openstack-k8s-operators/openstack-must-gather:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" podUID="dcf83976-1f0c-4cf7-91d8-3f0def01fe46" Feb 18 21:07:37 crc kubenswrapper[4932]: E0218 21:07:37.092484 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" podUID="dcf83976-1f0c-4cf7-91d8-3f0def01fe46" Feb 18 21:07:39 crc kubenswrapper[4932]: E0218 21:07:39.182429 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:07:40 crc kubenswrapper[4932]: E0218 21:07:40.365083 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 21:07:40 crc kubenswrapper[4932]: E0218 21:07:40.365364 4932 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkmxd_openshift-marketplace(9c30675a-a3c0-497c-804a-42c3640846eb): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout" logger="UnhandledError" Feb 18 21:07:40 crc kubenswrapper[4932]: E0218 21:07:40.366635 4932 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: reading manifest v4.18 in registry.redhat.io/redhat/redhat-operator-index: received unexpected HTTP status: 504 Gateway Timeout\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:07:45 crc kubenswrapper[4932]: E0218 21:07:45.184047 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:07:50 crc kubenswrapper[4932]: E0218 21:07:50.180677 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.104941 4932 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qrwmf/must-gather-8wp6g"] Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.116240 4932 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qrwmf/must-gather-8wp6g"] Feb 18 21:07:51 crc kubenswrapper[4932]: E0218 21:07:51.184194 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" 
podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.592865 4932 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.724823 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-must-gather-output\") pod \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.725204 4932 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9pzg\" (UniqueName: \"kubernetes.io/projected/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-kube-api-access-l9pzg\") pod \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\" (UID: \"dcf83976-1f0c-4cf7-91d8-3f0def01fe46\") " Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.725244 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "dcf83976-1f0c-4cf7-91d8-3f0def01fe46" (UID: "dcf83976-1f0c-4cf7-91d8-3f0def01fe46"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.726082 4932 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.730434 4932 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-kube-api-access-l9pzg" (OuterVolumeSpecName: "kube-api-access-l9pzg") pod "dcf83976-1f0c-4cf7-91d8-3f0def01fe46" (UID: "dcf83976-1f0c-4cf7-91d8-3f0def01fe46"). InnerVolumeSpecName "kube-api-access-l9pzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 21:07:51 crc kubenswrapper[4932]: I0218 21:07:51.828972 4932 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9pzg\" (UniqueName: \"kubernetes.io/projected/dcf83976-1f0c-4cf7-91d8-3f0def01fe46-kube-api-access-l9pzg\") on node \"crc\" DevicePath \"\"" Feb 18 21:07:52 crc kubenswrapper[4932]: I0218 21:07:52.293142 4932 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qrwmf/must-gather-8wp6g" Feb 18 21:07:53 crc kubenswrapper[4932]: I0218 21:07:53.192732 4932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcf83976-1f0c-4cf7-91d8-3f0def01fe46" path="/var/lib/kubelet/pods/dcf83976-1f0c-4cf7-91d8-3f0def01fe46/volumes" Feb 18 21:07:54 crc kubenswrapper[4932]: E0218 21:07:54.181944 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:07:57 crc kubenswrapper[4932]: I0218 21:07:57.606782 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:07:57 crc kubenswrapper[4932]: I0218 21:07:57.607479 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:07:57 crc kubenswrapper[4932]: I0218 21:07:57.607549 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" Feb 18 21:07:57 crc kubenswrapper[4932]: I0218 21:07:57.608771 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ede191f8512d552532e9e192938bd11d4065e409a6f674bff66d945fc49b0e49"} 
pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 21:07:57 crc kubenswrapper[4932]: I0218 21:07:57.608873 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://ede191f8512d552532e9e192938bd11d4065e409a6f674bff66d945fc49b0e49" gracePeriod=600 Feb 18 21:07:58 crc kubenswrapper[4932]: E0218 21:07:58.179843 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:07:58 crc kubenswrapper[4932]: I0218 21:07:58.359391 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="ede191f8512d552532e9e192938bd11d4065e409a6f674bff66d945fc49b0e49" exitCode=0 Feb 18 21:07:58 crc kubenswrapper[4932]: I0218 21:07:58.359437 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"ede191f8512d552532e9e192938bd11d4065e409a6f674bff66d945fc49b0e49"} Feb 18 21:07:58 crc kubenswrapper[4932]: I0218 21:07:58.359465 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerStarted","Data":"2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4"} Feb 18 21:07:58 crc kubenswrapper[4932]: I0218 21:07:58.359480 
4932 scope.go:117] "RemoveContainer" containerID="a0ea0959275531d67a491891af1a48e93cf4febf9aee1f4db03381a13d69ee89" Feb 18 21:08:01 crc kubenswrapper[4932]: E0218 21:08:01.183625 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:08:05 crc kubenswrapper[4932]: E0218 21:08:05.182105 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:08:06 crc kubenswrapper[4932]: E0218 21:08:06.183341 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:08:07 crc kubenswrapper[4932]: E0218 21:08:07.332033 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 21:08:07 crc kubenswrapper[4932]: E0218 21:08:07.332539 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l46c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2grr_openshift-marketplace(088aaa53-25ca-48c3-a904-2af0f07e8c2b): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:08:07 crc kubenswrapper[4932]: E0218 21:08:07.333876 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:08:11 crc 
kubenswrapper[4932]: E0218 21:08:11.183995 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:08:12 crc kubenswrapper[4932]: E0218 21:08:12.181559 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:08:16 crc kubenswrapper[4932]: E0218 21:08:16.183567 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:08:17 crc kubenswrapper[4932]: E0218 21:08:17.188053 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:08:21 crc kubenswrapper[4932]: E0218 21:08:21.187595 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:08:24 crc kubenswrapper[4932]: E0218 21:08:24.184716 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:08:26 crc kubenswrapper[4932]: E0218 21:08:26.181988 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:08:28 crc kubenswrapper[4932]: E0218 21:08:28.188855 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:08:29 crc kubenswrapper[4932]: E0218 21:08:29.181561 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:08:36 crc kubenswrapper[4932]: E0218 21:08:36.180986 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:08:36 crc kubenswrapper[4932]: E0218 21:08:36.181208 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:08:39 crc kubenswrapper[4932]: E0218 21:08:39.182096 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:08:39 crc kubenswrapper[4932]: E0218 21:08:39.182100 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:08:42 crc kubenswrapper[4932]: E0218 21:08:42.182140 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:08:49 crc kubenswrapper[4932]: E0218 21:08:49.183373 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:08:50 crc kubenswrapper[4932]: E0218 21:08:50.181425 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:08:51 crc kubenswrapper[4932]: E0218 21:08:51.182760 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:08:53 crc kubenswrapper[4932]: E0218 21:08:53.183658 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:09:01 crc kubenswrapper[4932]: E0218 21:09:01.185424 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:09:01 crc kubenswrapper[4932]: E0218 
21:09:01.186348 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:09:02 crc kubenswrapper[4932]: E0218 21:09:02.181467 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:09:06 crc kubenswrapper[4932]: E0218 21:09:06.182899 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:09:14 crc kubenswrapper[4932]: E0218 21:09:14.183770 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:09:14 crc kubenswrapper[4932]: E0218 21:09:14.184309 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:09:14 crc kubenswrapper[4932]: E0218 21:09:14.300351 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/quay/busybox:latest" Feb 18 21:09:14 crc kubenswrapper[4932]: E0218 21:09:14.300512 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(93c5c98c-cc87-4938-982e-54d3e1663dda): ErrImagePull: initializing source docker://quay.io/quay/busybox:latest: 
pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:09:14 crc kubenswrapper[4932]: E0218 21:09:14.301779 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"initializing source docker://quay.io/quay/busybox:latest: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:09:17 crc kubenswrapper[4932]: E0218 21:09:17.194523 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:09:17 crc kubenswrapper[4932]: E0218 21:09:17.194737 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:09:26 crc kubenswrapper[4932]: E0218 21:09:26.182143 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:09:26 crc kubenswrapper[4932]: E0218 21:09:26.182701 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:09:27 crc kubenswrapper[4932]: E0218 21:09:27.213624 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:09:28 crc kubenswrapper[4932]: E0218 21:09:28.182520 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:09:31 crc kubenswrapper[4932]: E0218 21:09:31.182944 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:09:38 crc kubenswrapper[4932]: I0218 21:09:38.182398 4932 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 21:09:40 crc kubenswrapper[4932]: E0218 21:09:40.181797 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:09:40 crc kubenswrapper[4932]: E0218 21:09:40.181810 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:09:41 crc kubenswrapper[4932]: E0218 21:09:41.182536 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:09:44 crc kubenswrapper[4932]: E0218 21:09:44.182406 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:09:53 crc kubenswrapper[4932]: E0218 21:09:53.184526 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:09:54 crc kubenswrapper[4932]: E0218 21:09:54.181899 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:09:55 crc kubenswrapper[4932]: E0218 21:09:55.190419 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:09:57 crc kubenswrapper[4932]: I0218 21:09:57.606708 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:09:57 crc kubenswrapper[4932]: I0218 21:09:57.607251 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:09:58 crc kubenswrapper[4932]: E0218 21:09:58.296353 4932 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 21:09:58 crc kubenswrapper[4932]: E0218 21:09:58.296965 4932 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jhb45_openshift-marketplace(1a69dedd-7666-4739-af80-59d37eedf9b1): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Feb 18 21:09:58 crc kubenswrapper[4932]: E0218 21:09:58.298283 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: pinging container registry quay.io: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:09:59 crc 
kubenswrapper[4932]: E0218 21:09:59.184999 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:10:05 crc kubenswrapper[4932]: E0218 21:10:05.196477 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:10:05 crc kubenswrapper[4932]: E0218 21:10:05.196919 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:10:07 crc kubenswrapper[4932]: E0218 21:10:07.204059 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:10:11 crc kubenswrapper[4932]: E0218 21:10:11.182525 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:10:12 crc kubenswrapper[4932]: E0218 21:10:12.183062 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:10:16 crc kubenswrapper[4932]: E0218 21:10:16.181314 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb" Feb 18 21:10:17 crc kubenswrapper[4932]: E0218 21:10:17.193593 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:10:22 crc kubenswrapper[4932]: E0218 21:10:22.183781 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75" Feb 18 21:10:23 crc kubenswrapper[4932]: E0218 21:10:23.184500 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b" Feb 18 21:10:25 crc kubenswrapper[4932]: E0218 21:10:25.181134 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1" Feb 18 21:10:27 crc kubenswrapper[4932]: I0218 21:10:27.606808 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 21:10:27 crc kubenswrapper[4932]: I0218 21:10:27.607419 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 21:10:30 crc kubenswrapper[4932]: E0218 21:10:30.181521 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda" Feb 18 21:10:31 crc kubenswrapper[4932]: E0218 21:10:31.182537 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 21:10:34 crc kubenswrapper[4932]: E0218 21:10:34.181244 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 21:10:37 crc kubenswrapper[4932]: E0218 21:10:37.197650 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 21:10:40 crc kubenswrapper[4932]: E0218 21:10:40.183940 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 21:10:41 crc kubenswrapper[4932]: E0218 21:10:41.181402 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda"
Feb 18 21:10:44 crc kubenswrapper[4932]: E0218 21:10:44.185768 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 21:10:48 crc kubenswrapper[4932]: E0218 21:10:48.183969 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"
Feb 18 21:10:49 crc kubenswrapper[4932]: E0218 21:10:49.182055 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8slcg" podUID="57dbf2a4-5676-4291-911d-00038d3c7c75"
Feb 18 21:10:52 crc kubenswrapper[4932]: E0218 21:10:52.181998 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="93c5c98c-cc87-4938-982e-54d3e1663dda"
Feb 18 21:10:55 crc kubenswrapper[4932]: E0218 21:10:55.188475 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-jhb45" podUID="1a69dedd-7666-4739-af80-59d37eedf9b1"
Feb 18 21:10:55 crc kubenswrapper[4932]: E0218 21:10:55.188550 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkmxd" podUID="9c30675a-a3c0-497c-804a-42c3640846eb"
Feb 18 21:10:57 crc kubenswrapper[4932]: I0218 21:10:57.606702 4932 patch_prober.go:28] interesting pod/machine-config-daemon-jf9v4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 21:10:57 crc kubenswrapper[4932]: I0218 21:10:57.608104 4932 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 21:10:57 crc kubenswrapper[4932]: I0218 21:10:57.608233 4932 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4"
Feb 18 21:10:57 crc kubenswrapper[4932]: I0218 21:10:57.610374 4932 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4"} pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 21:10:57 crc kubenswrapper[4932]: I0218 21:10:57.610523 4932 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerName="machine-config-daemon" containerID="cri-o://2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4" gracePeriod=600
Feb 18 21:10:57 crc kubenswrapper[4932]: E0218 21:10:57.759457 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 21:10:58 crc kubenswrapper[4932]: I0218 21:10:58.530841 4932 generic.go:334] "Generic (PLEG): container finished" podID="c2740774-23d5-4857-9ac6-f0a01e38a64c" containerID="2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4" exitCode=0
Feb 18 21:10:58 crc kubenswrapper[4932]: I0218 21:10:58.530907 4932 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" event={"ID":"c2740774-23d5-4857-9ac6-f0a01e38a64c","Type":"ContainerDied","Data":"2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4"}
Feb 18 21:10:58 crc kubenswrapper[4932]: I0218 21:10:58.530956 4932 scope.go:117] "RemoveContainer" containerID="ede191f8512d552532e9e192938bd11d4065e409a6f674bff66d945fc49b0e49"
Feb 18 21:10:58 crc kubenswrapper[4932]: I0218 21:10:58.533328 4932 scope.go:117] "RemoveContainer" containerID="2c6da2192fb9b773cfaec0ea03bb87c5491c12cdc84ec53587f6bc5832c8ffa4"
Feb 18 21:10:58 crc kubenswrapper[4932]: E0218 21:10:58.534071 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf9v4_openshift-machine-config-operator(c2740774-23d5-4857-9ac6-f0a01e38a64c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf9v4" podUID="c2740774-23d5-4857-9ac6-f0a01e38a64c"
Feb 18 21:11:00 crc kubenswrapper[4932]: E0218 21:11:00.182793 4932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2grr" podUID="088aaa53-25ca-48c3-a904-2af0f07e8c2b"